url
stringlengths
25
141
content
stringlengths
2.14k
402k
https://js.langchain.com/v0.1/docs/integrations/vectorstores/rockset/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Rockset On this page Rockset ======= [Rockset](https://rockset.com) is a real-time analyitics SQL database that runs in the cloud. Rockset provides vector search capabilities, in the form of [SQL functions](https://rockset.com/docs/vector-functions/#vector-distance-functions), to support AI applications that rely on text similarity. Setup[​](#setup "Direct link to Setup") --------------------------------------- Install the rockset client. yarn add @rockset/client ### Usage[​](#usage "Direct link to Usage") tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Below is an example showcasing how to use OpenAI and Rockset to answer questions about a text file: import * as rockset from "@rockset/client";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { RocksetStore } from "@langchain/community/vectorstores/rockset";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { readFileSync } from "fs";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";const store = await RocksetStore.withNewCollection(new OpenAIEmbeddings(), { client: rockset.default.default( process.env.ROCKSET_API_KEY ?? "", `https://api.${process.env.ROCKSET_API_REGION ?? "usw2a1"}.rockset.com` ), collectionName: "langchain_demo",});const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm: model, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: store.asRetriever(), combineDocsChain,});const text = readFileSync("state_of_the_union.txt", "utf8");const docs = await new RecursiveCharacterTextSplitter().createDocuments([text]);await store.addDocuments(docs);const response = await chain.invoke({ input: "When was America founded?",});console.log(response.answer);await store.destroy(); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RocksetStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_rockset.RocksetStore.html) from `@langchain/community/vectorstores/rockset` * [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` * * * #### Help us out by providing feedback on this documentation page: [ Previous Redis ](/v0.1/docs/integrations/vectorstores/redis/)[ Next SingleStore ](/v0.1/docs/integrations/vectorstores/singlestore/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/redis/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Redis On this page Redis ===== [Redis](https://redis.io/) is a fast open source, in-memory data store. As part of the [Redis Stack](https://redis.io/docs/stack/get-started/), [RediSearch](https://redis.io/docs/stack/search/) is the module that enables vector similarity semantic search, as well as many other types of searching. Compatibility Only available on Node.js. LangChain.js accepts [node-redis](https://github.com/redis/node-redis) as the client for Redis vectorstore. Setup[​](#setup "Direct link to Setup") --------------------------------------- 1. Run Redis with Docker on your computer following [the docs](https://redis.io/docs/stack/get-started/install/docker/#redisredis-stack) 2. Install the node-redis JS client * npm * Yarn * pnpm npm install -S redis yarn add redis pnpm add redis tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Index docs[​](#index-docs "Direct link to Index docs") ------------------------------------------------------ import { createClient } from "redis";import { OpenAIEmbeddings } from "@langchain/openai";import { RedisVectorStore } from "@langchain/redis";import { Document } from "@langchain/core/documents";const client = createClient({ url: process.env.REDIS_URL ?? "redis://localhost:6379",});await client.connect();const docs = [ new Document({ metadata: { foo: "bar" }, pageContent: "redis is fast", }), new Document({ metadata: { foo: "bar" }, pageContent: "the quick brown fox jumped over the lazy dog", }), new Document({ metadata: { baz: "qux" }, pageContent: "lorem ipsum dolor sit amet", }), new Document({ metadata: { baz: "qux" }, pageContent: "consectetur adipiscing elit", }),];const vectorStore = await RedisVectorStore.fromDocuments( docs, new OpenAIEmbeddings(), { redisClient: client, indexName: "docs", });await client.disconnect(); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RedisVectorStore](https://api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Query docs[​](#query-docs "Direct link to Query docs") ------------------------------------------------------ import { createClient } from "redis";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { RedisVectorStore } from "@langchain/redis";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";const client = createClient({ url: process.env.REDIS_URL ?? "redis://localhost:6379",});await client.connect();const vectorStore = new RedisVectorStore(new OpenAIEmbeddings(), { redisClient: client, indexName: "docs",});/* Simple standalone search in the vector DB */const simpleRes = await vectorStore.similaritySearch("redis", 1);console.log(simpleRes);/*[ Document { pageContent: "redis is fast", metadata: { foo: "bar" } }]*//* Search in the vector DB using filters */const filterRes = await vectorStore.similaritySearch("redis", 3, ["qux"]);console.log(filterRes);/*[ Document { pageContent: "consectetur adipiscing elit", metadata: { baz: "qux" }, }, Document { pageContent: "lorem ipsum dolor sit amet", metadata: { baz: "qux" }, }]*//* Usage as part of a chain */const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm: model, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});const chainRes = await chain.invoke({ input: "What did the fox do?" });console.log(chainRes);/* { input: 'What did the fox do?', chat_history: [], context: [ Document { pageContent: 'the quick brown fox jumped over the lazy dog', metadata: [Object] }, Document { pageContent: 'lorem ipsum dolor sit amet', metadata: [Object] }, Document { pageContent: 'consectetur adipiscing elit', metadata: [Object] }, Document { pageContent: 'redis is fast', metadata: [Object] } ], answer: 'The fox jumped over the lazy dog.' }*/await client.disconnect(); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RedisVectorStore](https://api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` Create index with options[​](#create-index-with-options "Direct link to Create index with options") --------------------------------------------------------------------------------------------------- To pass arguments for [index creation](https://redis.io/commands/ft.create/), you can utilize the [available options](https://github.com/redis/node-redis/blob/294cbf8367295ac81cbe51ce2932493ab80493f1/packages/search/lib/commands/CREATE.ts#L4) offered by [node-redis](https://github.com/redis/node-redis) through `createIndexOptions` parameter. import { createClient } from "redis";import { OpenAIEmbeddings } from "@langchain/openai";import { RedisVectorStore } from "@langchain/redis";import { Document } from "@langchain/core/documents";const client = createClient({ url: process.env.REDIS_URL ?? "redis://localhost:6379",});await client.connect();const docs = [ new Document({ metadata: { foo: "bar" }, pageContent: "redis is fast", }), new Document({ metadata: { foo: "bar" }, pageContent: "the quick brown fox jumped over the lazy dog", }), new Document({ metadata: { baz: "qux" }, pageContent: "lorem ipsum dolor sit amet", }), new Document({ metadata: { baz: "qux" }, pageContent: "consectetur adipiscing elit", }),];const vectorStore = await RedisVectorStore.fromDocuments( docs, new OpenAIEmbeddings(), { redisClient: client, indexName: "docs", createIndexOptions: { TEMPORARY: 1000, }, });await client.disconnect(); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RedisVectorStore](https://api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` Delete an index[​](#delete-an-index "Direct link to Delete an index") --------------------------------------------------------------------- import { createClient } from "redis";import { OpenAIEmbeddings } from "@langchain/openai";import { RedisVectorStore } from "@langchain/redis";import { Document } from "@langchain/core/documents";const client = createClient({ url: process.env.REDIS_URL ?? "redis://localhost:6379",});await client.connect();const docs = [ new Document({ metadata: { foo: "bar" }, pageContent: "redis is fast", }), new Document({ metadata: { foo: "bar" }, pageContent: "the quick brown fox jumped over the lazy dog", }), new Document({ metadata: { baz: "qux" }, pageContent: "lorem ipsum dolor sit amet", }), new Document({ metadata: { baz: "qux" }, pageContent: "consectetur adipiscing elit", }),];const vectorStore = await RedisVectorStore.fromDocuments( docs, new OpenAIEmbeddings(), { redisClient: client, indexName: "docs", });await vectorStore.delete({ deleteAll: true });await client.disconnect(); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [RedisVectorStore](https://api.js.langchain.com/classes/langchain_redis.RedisVectorStore.html) from `@langchain/redis` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * * * #### Help us out by providing feedback on this documentation page: [ Previous Qdrant ](/v0.1/docs/integrations/vectorstores/qdrant/)[ Next Rockset ](/v0.1/docs/integrations/vectorstores/rockset/) * [Setup](#setup) * [Index docs](#index-docs) * [Query docs](#query-docs) * [Create index with options](#create-index-with-options) * [Delete an index](#delete-an-index) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/qdrant/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Qdrant On this page Qdrant ====== [Qdrant](https://qdrant.tech/) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Setup[​](#setup "Direct link to Setup") --------------------------------------- 1. Run a Qdrant instance with Docker on your computer by following the [Qdrant setup instructions](https://qdrant.tech/documentation/quick-start/). 2. Install the Qdrant Node.js SDK. * npm * Yarn * pnpm npm install -S @langchain/qdrant yarn add @langchain/qdrant pnpm add @langchain/qdrant 3. Setup Env variables for Qdrant before running the code export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HEREexport QDRANT_URL=YOUR_QDRANT_URL_HERE # for example http://localhost:6333 Usage[​](#usage "Direct link to Usage") --------------------------------------- ### Create a new index from texts[​](#create-a-new-index-from-texts "Direct link to Create a new index from texts") tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { QdrantVectorStore } from "@langchain/qdrant";import { OpenAIEmbeddings } from "@langchain/openai";// text sample from Godel, Escher, Bachconst vectorStore = await QdrantVectorStore.fromTexts( [ `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious LittleHarmonic Labyrinth of the dreaded Majotaur?`, `Achilles: Yiikes! What is that?`, `Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!`, `Achilles: Oh, no!`, `Tortoise: But it's only a myth. Courage, Achilles.`, ], [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }], new OpenAIEmbeddings(), { url: process.env.QDRANT_URL, collectionName: "goldel_escher_bach", });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no!', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that?', metadata: { id: 1 } }]*/ #### API Reference: * [QdrantVectorStore](https://api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` ### Create a new index from docs[​](#create-a-new-index-from-docs "Direct link to Create a new index from docs") import { QdrantVectorStore } from "@langchain/qdrant";import { OpenAIEmbeddings } from "@langchain/openai";import { TextLoader } from "langchain/document_loaders/fs/text";// Create docs with a loaderconst loader = new TextLoader("src/document_loaders/example_data/example.txt");const docs = await loader.load();const vectorStore = await QdrantVectorStore.fromDocuments( docs, new OpenAIEmbeddings(), { url: process.env.QDRANT_URL, collectionName: "a_test_collection", });// Search for the most similar documentconst response = await vectorStore.similaritySearch("hello", 1);console.log(response);/*[ Document { pageContent: 'Foo\nBar\nBaz\n\n', metadata: { source: 'src/document_loaders/example_data/example.txt' } }]*/ #### API Reference: * [QdrantVectorStore](https://api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text` ### Query docs from existing collection[​](#query-docs-from-existing-collection "Direct link to Query docs from existing collection") import { QdrantVectorStore } from "@langchain/qdrant";import { OpenAIEmbeddings } from "@langchain/openai";const vectorStore = await QdrantVectorStore.fromExistingCollection( new OpenAIEmbeddings(), { url: process.env.QDRANT_URL, collectionName: "goldel_escher_bach", });const response = await vectorStore.similaritySearch("scared", 2);console.log(response);/*[ Document { pageContent: 'Achilles: Oh, no!', metadata: {} }, Document { pageContent: 'Achilles: Yiikes! What is that?', metadata: { id: 1 } }]*/ #### API Reference: * [QdrantVectorStore](https://api.js.langchain.com/classes/langchain_qdrant.QdrantVectorStore.html) from `@langchain/qdrant` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * * * #### Help us out by providing feedback on this documentation page: [ Previous Prisma ](/v0.1/docs/integrations/vectorstores/prisma/)[ Next Redis ](/v0.1/docs/integrations/vectorstores/redis/) * [Setup](#setup) * [Usage](#usage) * [Create a new index from texts](#create-a-new-index-from-texts) * [Create a new index from docs](#create-a-new-index-from-docs) * [Query docs from existing collection](#query-docs-from-existing-collection) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/prisma/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Prisma On this page Prisma ====== For augmenting existing models in PostgreSQL database with vector search, Langchain supports using [Prisma](https://www.prisma.io/) together with PostgreSQL and [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Setup database instance with Supabase[​](#setup-database-instance-with-supabase "Direct link to Setup database instance with Supabase") Refer to the [Prisma and Supabase integration guide](https://supabase.com/docs/guides/integrations/prisma) to setup a new database instance with Supabase and Prisma. ### Install Prisma[​](#install-prisma "Direct link to Install Prisma") * npm * Yarn * pnpm npm install prisma yarn add prisma pnpm add prisma ### Setup `pgvector` self hosted instance with `docker-compose`[​](#setup-pgvector-self-hosted-instance-with-docker-compose "Direct link to setup-pgvector-self-hosted-instance-with-docker-compose") `pgvector` provides a prebuilt Docker image that can be used to quickly setup a self-hosted Postgres instance. services: db: image: ankane/pgvector ports: - 5432:5432 volumes: - db:/var/lib/postgresql/data environment: - POSTGRES_PASSWORD= - POSTGRES_USER= - POSTGRES_DB=volumes: db: ### Create a new schema[​](#create-a-new-schema "Direct link to Create a new schema") Assuming you haven't created a schema yet, create a new model with a `vector` field of type `Unsupported("vector")`: model Document { id String @id @default(cuid()) content String vector Unsupported("vector")?} Afterwards, create a new migration with `--create-only` to avoid running the migration directly. * npm * Yarn * pnpm npx prisma migrate dev --create-only npx prisma migrate dev --create-only npx prisma migrate dev --create-only Add the following line to the newly created migration to enable `pgvector` extension if it hasn't been enabled yet: CREATE EXTENSION IF NOT EXISTS vector; Run the migration afterwards: * npm * Yarn * pnpm npx prisma migrate dev npx prisma migrate dev npx prisma migrate dev Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community danger Table names and column names (in fields such as `tableName`, `vectorColumnName`, `columns` and `filter`) are passed into SQL queries directly without parametrisation. These fields must be sanitized beforehand to avoid SQL injection. import { PrismaVectorStore } from "@langchain/community/vectorstores/prisma";import { OpenAIEmbeddings } from "@langchain/openai";import { PrismaClient, Prisma, Document } from "@prisma/client";export const run = async () => { const db = new PrismaClient(); // Use the `withModel` method to get proper type hints for `metadata` field: const vectorStore = PrismaVectorStore.withModel<Document>(db).create( new OpenAIEmbeddings(), { prisma: Prisma, tableName: "Document", vectorColumnName: "vector", columns: { id: PrismaVectorStore.IdColumn, content: PrismaVectorStore.ContentColumn, }, } ); const texts = ["Hello world", "Bye bye", "What's this?"]; await vectorStore.addModels( await db.$transaction( texts.map((content) => db.document.create({ data: { content } })) ) ); const resultOne = await vectorStore.similaritySearch("Hello world", 1); console.log(resultOne); // create an instance with default filter const vectorStore2 = PrismaVectorStore.withModel<Document>(db).create( new OpenAIEmbeddings(), { prisma: Prisma, tableName: "Document", vectorColumnName: "vector", columns: { id: PrismaVectorStore.IdColumn, content: PrismaVectorStore.ContentColumn, }, filter: { content: { equals: "default", }, }, } ); await vectorStore2.addModels( await db.$transaction( texts.map((content) => db.document.create({ data: { content } })) ) ); // Use the default filter a.k.a {"content": "default"} const resultTwo = await vectorStore.similaritySearch("Hello world", 1); console.log(resultTwo);}; #### API Reference: * [PrismaVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_prisma.PrismaVectorStore.html) from `@langchain/community/vectorstores/prisma` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` The following SQL operators are available as filters: `equals`, `in`, `isNull`, `isNotNull`, `like`, `lt`, `lte`, `gt`, `gte`, `not`. The samples above uses the following schema: // This is your Prisma schema file,// learn more about it in the docs: https://pris.ly/d/prisma-schemagenerator client { provider = "prisma-client-js"}datasource db { provider = "postgresql" url = env("DATABASE_URL")}model Document { id String @id @default(cuid()) content String namespace String? @default("default") vector Unsupported("vector")?} #### API Reference: You can remove `namespace` if you don't need it. * * * #### Help us out by providing feedback on this documentation page: [ Previous Pinecone ](/v0.1/docs/integrations/vectorstores/pinecone/)[ Next Qdrant ](/v0.1/docs/integrations/vectorstores/qdrant/) * [Setup](#setup) * [Setup database instance with Supabase](#setup-database-instance-with-supabase) * [Install Prisma](#install-prisma) * [Setup `pgvector` self hosted instance with `docker-compose`](#setup-pgvector-self-hosted-instance-with-docker-compose) * [Create a new schema](#create-a-new-schema) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/turbopuffer/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Turbopuffer On this page Turbopuffer =========== Setup[​](#setup "Direct link to Setup") --------------------------------------- First you must sign up for a Turbopuffer account [here](https://turbopuffer.com/join). Then, once you have an account you can create an API key. Set your API key as an environment variable: export TURBOPUFFER_API_KEY=<YOUR_API_KEY> Usage[​](#usage "Direct link to Usage") --------------------------------------- Here are some examples of how to use the class. You can filter your queries by previous specified metadata, but keep in mind that currently only string values are supported. See [here for more information](https://turbopuffer.com/docs/reference/query#filter-parameters) on acceptable filter formats. import { OpenAIEmbeddings } from "@langchain/openai";import { TurbopufferVectorStore } from "@langchain/community/vectorstores/turbopuffer";const embeddings = new OpenAIEmbeddings();const store = new TurbopufferVectorStore(embeddings, { apiKey: process.env.TURBOPUFFER_API_KEY, namespace: "my-namespace",});const createdAt = new Date().getTime();// Add some documents to your store.// Currently, only string metadata values are supported.const ids = await store.addDocuments([ { pageContent: "some content", metadata: { created_at: createdAt.toString() }, }, { pageContent: "hi", metadata: { created_at: (createdAt + 1).toString() } }, { pageContent: "bye", metadata: { created_at: (createdAt + 2).toString() } }, { pageContent: "what's this", metadata: { created_at: (createdAt + 3).toString() }, },]);// Retrieve documents from the storeconst results = await store.similaritySearch("hello", 1);console.log(results);/* [ Document { pageContent: 'hi', metadata: { created_at: '1705519164987' } } ]*/// Filter by metadata// See https://turbopuffer.com/docs/reference/query#filter-parameters for more on// allowed filtersconst results2 = await store.similaritySearch("hello", 1, { created_at: [["Eq", (createdAt + 3).toString()]],});console.log(results2);/* [ Document { pageContent: "what's this", metadata: { created_at: '1705519164989' } } ]*/// Upsert by passing idsawait store.addDocuments( [ { pageContent: "changed", metadata: { created_at: createdAt.toString() } }, { pageContent: "hi changed", metadata: { created_at: (createdAt + 1).toString() }, }, { pageContent: "bye changed", metadata: { created_at: (createdAt + 2).toString() }, }, { pageContent: "what's this changed", metadata: { created_at: (createdAt + 3).toString() }, }, ], { ids });// Filter by metadataconst results3 = await store.similaritySearch("hello", 10, { created_at: [["Eq", (createdAt + 3).toString()]],});console.log(results3);/* [ Document { pageContent: "what's this changed", metadata: { created_at: '1705519164989' } } ]*/// Remove all vectors from the namespace.await store.delete({ deleteIndex: true,}); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [TurbopufferVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_turbopuffer.TurbopufferVectorStore.html) from `@langchain/community/vectorstores/turbopuffer` * * * #### Help us out by providing feedback on this documentation page: [ Previous Tigris ](/v0.1/docs/integrations/vectorstores/tigris/)[ Next TypeORM ](/v0.1/docs/integrations/vectorstores/typeorm/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/tigris/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Tigris On this page Tigris ====== Tigris makes it easy to build AI applications with vector embeddings. It is a fully managed cloud-native database that allows you store and index documents and vector embeddings for fast and scalable vector search. Compatibility Only available on Node.js. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### 1\. Install the Tigris SDK[​](#1-install-the-tigris-sdk "Direct link to 1. Install the Tigris SDK") Install the SDK as follows * npm * Yarn * pnpm npm install -S @tigrisdata/vector yarn add @tigrisdata/vector pnpm add @tigrisdata/vector ### 2\. Fetch Tigris API credentials[​](#2-fetch-tigris-api-credentials "Direct link to 2. Fetch Tigris API credentials") You can sign up for a free Tigris account [here](https://www.tigrisdata.com/). Once you have signed up for the Tigris account, create a new project called `vectordemo`. Next, make a note of the `clientId` and `clientSecret`, which you can get from the Application Keys section of the project. Index docs[​](#index-docs "Direct link to Index docs") ------------------------------------------------------ tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install -S @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { VectorDocumentStore } from "@tigrisdata/vector";import { Document } from "langchain/document";import { OpenAIEmbeddings } from "@langchain/openai";import { TigrisVectorStore } from "langchain/vectorstores/tigris";const index = new VectorDocumentStore({ connection: { serverUrl: "api.preview.tigrisdata.cloud", projectName: process.env.TIGRIS_PROJECT, clientId: process.env.TIGRIS_CLIENT_ID, clientSecret: process.env.TIGRIS_CLIENT_SECRET, }, indexName: "examples_index", numDimensions: 1536, // match the OpenAI embedding size});const docs = [ new Document({ metadata: { foo: "bar" }, pageContent: "tigris is a cloud-native vector db", }), new Document({ metadata: { foo: "bar" }, pageContent: "the quick brown fox jumped over the lazy dog", }), new Document({ metadata: { baz: "qux" }, pageContent: "lorem ipsum dolor sit amet", }), new Document({ metadata: { baz: "qux" }, pageContent: "tigris is a river", }),];await TigrisVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), { index }); Query docs[​](#query-docs "Direct link to Query docs") ------------------------------------------------------ import { VectorDocumentStore } from "@tigrisdata/vector";import { OpenAIEmbeddings } from "@langchain/openai";import { TigrisVectorStore } from "langchain/vectorstores/tigris";const index = new VectorDocumentStore({ connection: { serverUrl: "api.preview.tigrisdata.cloud", projectName: process.env.TIGRIS_PROJECT, clientId: process.env.TIGRIS_CLIENT_ID, clientSecret: process.env.TIGRIS_CLIENT_SECRET, }, indexName: "examples_index", numDimensions: 1536, // match the OpenAI embedding size});const vectorStore = await TigrisVectorStore.fromExistingIndex( new OpenAIEmbeddings(), { index });/* Search the vector DB independently with metadata filters */const results = await vectorStore.similaritySearch("tigris", 1, { "metadata.foo": "bar",});console.log(JSON.stringify(results, null, 2));/*[ Document { pageContent: 'tigris is a cloud-native vector db', metadata: { foo: 'bar' } }]*/ * * * #### Help us out by providing feedback on this documentation page: [ Previous Supabase ](/v0.1/docs/integrations/vectorstores/supabase/)[ Next Turbopuffer ](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [Setup](#setup) * [1\. Install the Tigris SDK](#1-install-the-tigris-sdk) * [2\. Fetch Tigris API credentials](#2-fetch-tigris-api-credentials) * [Index docs](#index-docs) * [Query docs](#query-docs) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/typesense/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Typesense On this page Typesense ========= Vector store that utilizes the Typesense search engine. ### Basic Usage[​](#basic-usage "Direct link to Basic Usage") tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { Typesense, TypesenseConfig } from "langchain/vectorstores/typesense";import { OpenAIEmbeddings } from "@langchain/openai";import { Client } from "typesense";import { Document } from "langchain/document";const vectorTypesenseClient = new Client({ nodes: [ { // Ideally should come from your .env file host: "...", port: 123, protocol: "https", }, ], // Ideally should come from your .env file apiKey: "...", numRetries: 3, connectionTimeoutSeconds: 60,});const typesenseVectorStoreConfig = { // Typesense client typesenseClient: vectorTypesenseClient, // Name of the collection to store the vectors in schemaName: "your_schema_name", // Optional column names to be used in Typesense columnNames: { // "vec" is the default name for the vector column in Typesense but you can change it to whatever you want vector: "vec", // "text" is the default name for the text column in Typesense but you can change it to whatever you want pageContent: "text", // Names of the columns that you will save in your typesense schema and need to be retrieved as metadata when searching metadataColumnNames: ["foo", "bar", "baz"], }, // Optional search parameters to be passed to Typesense when searching searchParams: { q: "*", filter_by: "foo:[fooo]", query_by: "", }, // You can override the default Typesense import function if you want to do something more complex // Default import function: // async importToTypesense< // T extends Record<string, unknown> = Record<string, unknown> // >(data: T[], collectionName: string) { // const chunkSize = 2000; // for (let i = 0; i < data.length; i += chunkSize) { // const chunk = data.slice(i, i + chunkSize); // await this.caller.call(async () => { // await this.client // .collections<T>(collectionName) // .documents() // .import(chunk, { action: "emplace", dirty_values: "drop" }); // }); // } // } import: async (data, collectionName) => { await vectorTypesenseClient .collections(collectionName) .documents() .import(data, { action: "emplace", dirty_values: "drop" }); },} satisfies TypesenseConfig;/** * Creates a Typesense vector store from a list of documents. * Will update documents if there is a document with the same id, at least with the default import function. * @param documents list of documents to create the vector store from * @returns Typesense vector store */const createVectorStoreWithTypesense = async (documents: Document[] = []) => Typesense.fromDocuments( documents, new OpenAIEmbeddings(), typesenseVectorStoreConfig );/** * Returns a Typesense vector store from an existing index. * @returns Typesense vector store */const getVectorStoreWithTypesense = async () => new Typesense(new OpenAIEmbeddings(), typesenseVectorStoreConfig);// Do a similarity searchconst vectorStore = await getVectorStoreWithTypesense();const documents = await vectorStore.similaritySearch("hello world");// Add filters based on metadata with the search parameters of Typesense// will exclude documents with author:JK Rowling, so if Joe Rowling & JK Rowling exists, only Joe Rowling will be returnedvectorStore.similaritySearch("Rowling", undefined, { filter_by: "author:!=JK Rowling",});// Delete a documentvectorStore.deleteDocuments(["document_id_1", "document_id_2"]); ### Constructor[​](#constructor "Direct link to Constructor") Before starting, create a schema in Typesense with an id, a field for the vector and a field for the text. Add as many other fields as needed for the metadata. * `constructor(embeddings: Embeddings, config: TypesenseConfig)`: Constructs a new instance of the `Typesense` class. * `embeddings`: An instance of the `Embeddings` class used for embedding documents. * `config`: Configuration object for the Typesense vector store. * `typesenseClient`: Typesense client instance. * `schemaName`: Name of the Typesense schema in which documents will be stored and searched. * `searchParams` (optional): Typesense search parameters. Default is `{ q: '*', per_page: 5, query_by: '' }`. * `columnNames` (optional): Column names configuration. * `vector` (optional): Vector column name. Default is `'vec'`. * `pageContent` (optional): Page content column name. Default is `'text'`. * `metadataColumnNames` (optional): Metadata column names. Default is an empty array `[]`. * `import` (optional): Replace the default import function for importing data to Typesense. This can affect the functionality of updating documents. ### Methods[​](#methods "Direct link to Methods") * `async addDocuments(documents: Document[]): Promise<void>`: Adds documents to the vector store. The documents will be updated if there is a document with the same ID. * `static async fromDocuments(docs: Document[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of documents. Documents are added to the vector store during construction. * `static async fromTexts(texts: string[], metadatas: object[], embeddings: Embeddings, config: TypesenseConfig): Promise<Typesense>`: Creates a Typesense vector store from a list of texts and associated metadata. Texts are converted to documents and added to the vector store during construction. * `async similaritySearch(query: string, k?: number, filter?: Record<string, unknown>): Promise<Document[]>`: Searches for similar documents based on a query. Returns an array of similar documents. * `async deleteDocuments(documentIds: string[]): Promise<void>`: Deletes documents from the vector store based on their IDs. * * * #### Help us out by providing feedback on this documentation page: [ Previous TypeORM ](/v0.1/docs/integrations/vectorstores/typeorm/)[ Next Upstash Vector ](/v0.1/docs/integrations/vectorstores/upstash/) * [Basic Usage](#basic-usage) * [Constructor](#constructor) * [Methods](#methods) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/typeorm/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * TypeORM On this page TypeORM ======= To enable vector search in a generic PostgreSQL database, LangChain.js supports using [TypeORM](https://typeorm.io/) with the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension. Setup[​](#setup "Direct link to Setup") --------------------------------------- To work with TypeORM, you need to install the `typeorm` and `pg` packages: * npm * Yarn * pnpm npm install typeorm yarn add typeorm pnpm add typeorm * npm * Yarn * pnpm npm install pg yarn add pg pnpm add pg tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community ### Setup a `pgvector` self hosted instance with `docker-compose`[​](#setup-a-pgvector-self-hosted-instance-with-docker-compose "Direct link to setup-a-pgvector-self-hosted-instance-with-docker-compose") `pgvector` provides a prebuilt Docker image that can be used to quickly setup a self-hosted Postgres instance. Create a file below named `docker-compose.yml`: export default {services:{db:{image:'ankane/pgvector',ports:['5432:5432'],volumes:['./data:/var/lib/postgresql/data'],environment:['POSTGRES_PASSWORD=ChangeMe','POSTGRES_USER=myuser','POSTGRES_DB=api']}}}; #### API Reference: And then in the same directory, run `docker compose up` to start the container. You can find more information on how to setup `pgvector` in the [official repository](https://github.com/pgvector/pgvector). Usage[​](#usage "Direct link to Usage") --------------------------------------- One complete example of using `TypeORMVectorStore` is the following: import { DataSourceOptions } from "typeorm";import { OpenAIEmbeddings } from "@langchain/openai";import { TypeORMVectorStore } from "@langchain/community/vectorstores/typeorm";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/typeormexport const run = async () => { const args = { postgresConnectionOptions: { type: "postgres", host: "localhost", port: 5432, username: "myuser", password: "ChangeMe", database: "api", } as DataSourceOptions, }; const typeormVectorStore = await TypeORMVectorStore.fromDataSource( new OpenAIEmbeddings(), args ); await typeormVectorStore.ensureTableInDatabase(); await typeormVectorStore.addDocuments([ { pageContent: "what's this", metadata: { a: 2 } }, { pageContent: "Cat drinks milk", metadata: { a: 1 } }, ]); const results = await typeormVectorStore.similaritySearch("hello", 2); console.log(results);}; #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [TypeORMVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_typeorm.TypeORMVectorStore.html) from `@langchain/community/vectorstores/typeorm` * * * #### Help us out by providing feedback on this documentation page: [ Previous Turbopuffer ](/v0.1/docs/integrations/vectorstores/turbopuffer/)[ Next Typesense ](/v0.1/docs/integrations/vectorstores/typesense/) * [Setup](#setup) * [Setup a `pgvector` self hosted instance with `docker-compose`](#setup-a-pgvector-self-hosted-instance-with-docker-compose) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/vectorstores/upstash/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Memory](/v0.1/docs/integrations/vectorstores/memory/) * [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) * [Astra DB](/v0.1/docs/integrations/vectorstores/astradb/) * [Azure AI Search](/v0.1/docs/integrations/vectorstores/azure_aisearch/) * [Azure Cosmos DB](/v0.1/docs/integrations/vectorstores/azure_cosmosdb/) * [Cassandra](/v0.1/docs/integrations/vectorstores/cassandra/) * [Chroma](/v0.1/docs/integrations/vectorstores/chroma/) * [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/) * [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/) * [Cloudflare Vectorize](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/) * [Convex](/v0.1/docs/integrations/vectorstores/convex/) * [Couchbase](/v0.1/docs/integrations/vectorstores/couchbase/) * [Elasticsearch](/v0.1/docs/integrations/vectorstores/elasticsearch/) * [Faiss](/v0.1/docs/integrations/vectorstores/faiss/) * [Google Vertex AI Matching Engine](/v0.1/docs/integrations/vectorstores/googlevertexai/) * [SAP HANA Cloud Vector Engine](/v0.1/docs/integrations/vectorstores/hanavector/) * [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) * [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) * [Milvus](/v0.1/docs/integrations/vectorstores/milvus/) * [Momento Vector Index (MVI)](/v0.1/docs/integrations/vectorstores/momento_vector_index/) * [MongoDB Atlas](/v0.1/docs/integrations/vectorstores/mongodb_atlas/) * [MyScale](/v0.1/docs/integrations/vectorstores/myscale/) * [Neo4j Vector Index](/v0.1/docs/integrations/vectorstores/neo4jvector/) * [Neon Postgres](/v0.1/docs/integrations/vectorstores/neon/) * [OpenSearch](/v0.1/docs/integrations/vectorstores/opensearch/) * [PGVector](/v0.1/docs/integrations/vectorstores/pgvector/) * [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/) * [Prisma](/v0.1/docs/integrations/vectorstores/prisma/) * [Qdrant](/v0.1/docs/integrations/vectorstores/qdrant/) * [Redis](/v0.1/docs/integrations/vectorstores/redis/) * [Rockset](/v0.1/docs/integrations/vectorstores/rockset/) * [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) * [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) * [Tigris](/v0.1/docs/integrations/vectorstores/tigris/) * [Turbopuffer](/v0.1/docs/integrations/vectorstores/turbopuffer/) * [TypeORM](/v0.1/docs/integrations/vectorstores/typeorm/) * [Typesense](/v0.1/docs/integrations/vectorstores/typesense/) * [Upstash Vector](/v0.1/docs/integrations/vectorstores/upstash/) * [USearch](/v0.1/docs/integrations/vectorstores/usearch/) * [Vectara](/v0.1/docs/integrations/vectorstores/vectara/) * [Vercel Postgres](/v0.1/docs/integrations/vectorstores/vercel_postgres/) * [Voy](/v0.1/docs/integrations/vectorstores/voy/) * [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/) * [Xata](/v0.1/docs/integrations/vectorstores/xata/) * [Zep](/v0.1/docs/integrations/vectorstores/zep/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * Upstash Vector On this page Upstash Vector ============== Upstash Vector is a REST based serverless vector database, designed for working with vector embeddings. Setup[​](#setup "Direct link to Setup") --------------------------------------- 1. Create Upstash Vector Index You can create an index from [Upstash Console](https://console.upstash.com/vector). For further reference, see [docs](https://upstash.com/docs/vector/overall/getstarted). 2. Install Upstash Vector SDK. * npm * Yarn * pnpm npm install -S @upstash/vector yarn add @upstash/vector pnpm add @upstash/vector We use OpenAI for the embeddings of the below examples. However, you can also create the embeddings using the model of your choice, that is available in the LangChain. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Create Upstash Vector Client[​](#create-upstash-vector-client "Direct link to Create Upstash Vector Client") ------------------------------------------------------------------------------------------------------------ There are two ways to create the client. You can either pass the credentials as string manually from the `.env` file (or as string variables), or you can retrieve the credentials from the environment automatically. import { Index } from "@upstash/vector";import { OpenAIEmbeddings } from "@langchain/openai";import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";const embeddings = new OpenAIEmbeddings({});// Creating the index with the provided credentials.const indexWithCredentials = new Index({ url: process.env.UPSTASH_VECTOR_REST_URL as string, token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,});const storeWithCredentials = new UpstashVectorStore(embeddings, { index: indexWithCredentials,});// Creating the index from the environment variables automatically.const indexFromEnv = new Index();const storeFromEnv = new UpstashVectorStore(embeddings, { index: indexFromEnv,}); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [UpstashVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash` Index and Query Documents[​](#index-and-query-documents "Direct link to Index and Query Documents") --------------------------------------------------------------------------------------------------- You can index the LangChain documents with any model of your choice, and perform a search over these documents. It's possible to apply metadata filtering to the search results. See [the related docs here](https://upstash.com/docs/vector/features/filtering). import { Index } from "@upstash/vector";import { OpenAIEmbeddings } from "@langchain/openai";import { Document } from "@langchain/core/documents";import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";const index = new Index({ url: process.env.UPSTASH_VECTOR_REST_URL as string, token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,});const embeddings = new OpenAIEmbeddings({});const UpstashVector = new UpstashVectorStore(embeddings, { index });// Creating the docs to be indexed.const id = new Date().getTime();const documents = [ new Document({ metadata: { name: id }, pageContent: "Hello there!", }), new Document({ metadata: { name: id }, pageContent: "What are you building?", }), new Document({ metadata: { time: id }, pageContent: "Upstash Vector is great for building AI applications.", }), new Document({ metadata: { time: id }, pageContent: "To be, or not to be, that is the question.", }),];// Creating embeddings from the provided documents, and adding them to Upstash database.await UpstashVector.addDocuments(documents);// Waiting vectors to be indexed in the vector store.// eslint-disable-next-line no-promise-executor-returnawait new Promise((resolve) => setTimeout(resolve, 1000));const queryResult = await UpstashVector.similaritySearchWithScore( "Vector database", 2);console.log(queryResult);/**[ [ Document { pageContent: 'Upstash Vector is great for building AI applications.', metadata: [Object] }, 0.9016147 ], [ Document { pageContent: 'What are you building?', metadata: [Object] }, 0.8613077 ]] */ #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [UpstashVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash` Delete Documents[​](#delete-documents "Direct link to Delete Documents") ------------------------------------------------------------------------ You can also delete the documents you've indexed previously. import { Index } from "@upstash/vector";import { OpenAIEmbeddings } from "@langchain/openai";import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash";const index = new Index({ url: process.env.UPSTASH_VECTOR_REST_URL as string, token: process.env.UPSTASH_VECTOR_REST_TOKEN as string,});const embeddings = new OpenAIEmbeddings({});const UpstashVector = new UpstashVectorStore(embeddings, { index });// Creating the docs to be indexed.const createdAt = new Date().getTime();const IDs = await UpstashVector.addDocuments([ { pageContent: "hello", metadata: { a: createdAt + 1 } }, { pageContent: "car", metadata: { a: createdAt } }, { pageContent: "adjective", metadata: { a: createdAt } }, { pageContent: "hi", metadata: { a: createdAt } },]);// Waiting vectors to be indexed in the vector store.// eslint-disable-next-line no-promise-executor-returnawait new Promise((resolve) => setTimeout(resolve, 1000));await UpstashVector.delete({ ids: [IDs[0], IDs[2], IDs[3]] }); #### API Reference: * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [UpstashVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_upstash.UpstashVectorStore.html) from `@langchain/community/vectorstores/upstash` * * * #### Help us out by providing feedback on this documentation page: [ Previous Typesense ](/v0.1/docs/integrations/vectorstores/typesense/)[ Next USearch ](/v0.1/docs/integrations/vectorstores/usearch/) * [Setup](#setup) * [Create Upstash Vector Client](#create-upstash-vector-client) * [Index and Query Documents](#index-and-query-documents) * [Delete Documents](#delete-documents) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/aiplugin-tool/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * ChatGPT Plugins ChatGPT Plugins =============== This example shows how to use ChatGPT Plugins within LangChain abstractions. Note 1: This currently only works for plugins with no auth. Note 2: There are almost certainly other ways to do this, this is just a first pass. If you have better ideas, please open a PR! tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";import { RequestsGetTool, RequestsPostTool } from "langchain/tools";import { AIPluginTool } from "@langchain/community/tools/aiplugin";export const run = async () => { const tools = [ new RequestsGetTool(), new RequestsPostTool(), await AIPluginTool.fromPluginUrl( "https://www.klarna.com/.well-known/ai-plugin.json" ), ]; const executor = await initializeAgentExecutorWithOptions( tools, new ChatOpenAI({ temperature: 0 }), { agentType: "chat-zero-shot-react-description", verbose: true } ); const result = await executor.invoke({ input: "what t shirts are available in klarna?", }); console.log({ result });}; #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [RequestsGetTool](https://api.js.langchain.com/classes/langchain_tools.RequestsGetTool.html) from `langchain/tools` * [RequestsPostTool](https://api.js.langchain.com/classes/langchain_tools.RequestsPostTool.html) from `langchain/tools` * [AIPluginTool](https://api.js.langchain.com/classes/langchain_community_tools_aiplugin.AIPluginTool.html) from `@langchain/community/tools/aiplugin` Entering new agent_executor chain...Thought: Klarna is a payment provider, not a store. I need to check if there is a Klarna Shopping API that I can use to search for t-shirts.Action:```{"action": "KlarnaProducts","action_input": ""}```Usage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.OpenAPI Spec: {"openapi":"3.0.1","info":{"version":"v0","title":"Open AI Klarna product Api"},"servers":[{"url":"https://www.klarna.com/us/shopping"}],"tags":[{"name":"open-ai-product-endpoint","description":"Open AI Product Endpoint. Query for products."}],"paths":{"/public/openai/v0/products":{"get":{"tags":["open-ai-product-endpoint"],"summary":"API for fetching Klarna product information","operationId":"productsUsingGET","parameters":[{"name":"q","in":"query","description":"query, must be between 2 and 100 characters","required":true,"schema":{"type":"string"}},{"name":"size","in":"query","description":"number of products returned","required":false,"schema":{"type":"integer"}},{"name":"budget","in":"query","description":"maximum price of the matching product in local currency, filters results","required":false,"schema":{"type":"integer"}}],"responses":{"200":{"description":"Products found","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ProductResponse"}}}},"503":{"description":"one or more services are unavailable"}},"deprecated":false}}},"components":{"schemas":{"Product":{"type":"object","properties":{"attributes":{"type":"array","items":{"type":"string"}},"name":{"type":"string"},"price":{"type":"string"},"url":{"type":"string"}},"title":"Product"},"ProductResponse":{"type":"object","properties":{"products":{"type":"array","items":{"$ref":"#/components/schemas/Product"}}},"title":"ProductResponse"}}}}Now that I know there is a Klarna Shopping API, I can use it to search for t-shirts. I will make a GET request to the API with the query parameter "t-shirt".Action:```{"action": "requests_get","action_input": "https://www.klarna.com/us/shopping/public/openai/v0/products?q=t-shirt"}```{"products":[{"name":"Psycho Bunny Mens Copa Gradient Logo Graphic Tee","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203663222/Clothing/Psycho-Bunny-Mens-Copa-Gradient-Logo-Graphic-Tee/?source=openai","price":"$35.00","attributes":["Material:Cotton","Target Group:Man","Color:White,Blue,Black,Orange"]},{"name":"T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203506327/Clothing/T-shirt/?source=openai","price":"$20.45","attributes":["Material:Cotton","Target Group:Man","Color:Gray,White,Blue,Black,Orange"]},{"name":"Palm Angels Bear T-shirt - Black","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201090513/Clothing/Palm-Angels-Bear-T-shirt-Black/?source=openai","price":"$168.36","attributes":["Material:Cotton","Target Group:Man","Color:Black"]},{"name":"Tommy Hilfiger Essential Flag Logo T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201840629/Clothing/Tommy-Hilfiger-Essential-Flag-Logo-T-shirt/?source=openai","price":"$22.52","attributes":["Material:Cotton","Target Group:Man","Color:Red,Gray,White,Blue,Black","Pattern:Solid Color","Environmental Attributes :Organic"]},{"name":"Coach Outlet Signature T Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203005573/Clothing/Coach-Outlet-Signature-T-Shirt/?source=openai","price":"$75.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray"]}]}Finished chain.{ result: { output: 'The available t-shirts in Klarna are Psycho Bunny Mens Copa Gradient Logo Graphic Tee, T-shirt, Palm Angels Bear T-shirt - Black, Tommy Hilfiger Essential Flag Logo T-shirt, and Coach Outlet Signature T Shirt.', intermediateSteps: [ [Object], [Object] ] }} * * * #### Help us out by providing feedback on this documentation page: [ Previous Tools ](/v0.1/docs/integrations/tools/)[ Next Connery Action Tool ](/v0.1/docs/integrations/tools/connery/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/connery/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Connery Action Tool On this page Connery Action Tool =================== Using this tool, you can integrate individual Connery Action into your LangChain agent. note If you want to use more than one Connery Action in your agent, check out the [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) documentation. What is Connery?[​](#what-is-connery "Direct link to What is Connery?") ----------------------------------------------------------------------- Connery is an open-source plugin infrastructure for AI. With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent. Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features. Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience. Learn more about Connery: * GitHub: [https://github.com/connery-io/connery](https://github.com/connery-io/connery) * Documentation: [https://docs.connery.io](https://docs.connery.io) Prerequisites[​](#prerequisites "Direct link to Prerequisites") --------------------------------------------------------------- To use Connery Actions in your LangChain agent, you need to do some preparation: 1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide. 2. Install all the plugins with the actions you want to use in your agent. 3. Set environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner. Example of using Connery Action Tool[​](#example-of-using-connery-action-tool "Direct link to Example of using Connery Action Tool") ------------------------------------------------------------------------------------------------------------------------------------ ### Setup[​](#setup "Direct link to Setup") To use the Connery Action Tool you need to install the following official peer dependency: * npm * Yarn * pnpm npm install @langchain/community yarn add @langchain/community pnpm add @langchain/community tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). ### Usage[​](#usage "Direct link to Usage") In the example below, we fetch action by its ID from the Connery Runner and then call it with the specified parameters. Here, we use the ID of the **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin. info You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/c4b6723d-f91c-440c-8682-16ec8297a602/r). import { ConneryService } from "@langchain/community/tools/connery";import { ChatOpenAI } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";// Specify your Connery Runner credentials.process.env.CONNERY_RUNNER_URL = "";process.env.CONNERY_RUNNER_API_KEY = "";// Specify OpenAI API key.process.env.OPENAI_API_KEY = "";// Specify your email address to receive the emails from examples below.const recepientEmail = "test@example.com";// Get the SendEmail action from the Connery Runner by ID.const conneryService = new ConneryService();const sendEmailAction = await conneryService.getAction( "CABC80BB79C15067CA983495324AE709");// Run the action manually.const manualRunResult = await sendEmailAction.invoke({ recipient: recepientEmail, subject: "Test email", body: "This is a test email sent by Connery.",});console.log(manualRunResult);// Run the action using the OpenAI Functions agent.const llm = new ChatOpenAI({ temperature: 0 });const agent = await initializeAgentExecutorWithOptions([sendEmailAction], llm, { agentType: "openai-functions", verbose: true,});const agentRunResult = await agent.invoke({ input: `Send an email to the ${recepientEmail} and say that I will be late for the meeting.`,});console.log(agentRunResult); #### API Reference: * [ConneryService](https://api.js.langchain.com/classes/langchain_community_tools_connery.ConneryService.html) from `@langchain/community/tools/connery` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` note Connery Action is a structured tool, so you can only use it in the agents supporting structured tools. * * * #### Help us out by providing feedback on this documentation page: [ Previous ChatGPT Plugins ](/v0.1/docs/integrations/tools/aiplugin-tool/)[ Next Dall-E Tool ](/v0.1/docs/integrations/tools/dalle/) * [What is Connery?](#what-is-connery) * [Prerequisites](#prerequisites) * [Example of using Connery Action Tool](#example-of-using-connery-action-tool) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/dalle/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Dall-E Tool Dall-E Tool =========== The Dall-E tool allows your agent to create images using OpenAI's Dall-E image generation tool. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need an OpenAI API Key which you can get from the [OpenAI web site](https://openai.com) and then set the OPENAI\_API\_KEY environment variable to the key you just created. To use the Dall-E Tool you need to install the LangChain OpenAI integration package: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai /* eslint-disable no-process-env */import { DallEAPIWrapper } from "@langchain/openai";const tool = new DallEAPIWrapper({ n: 1, // Default model: "dall-e-3", // Default apiKey: process.env.OPENAI_API_KEY, // Default});const imageURL = await tool.invoke("a painting of a cat");console.log(imageURL); #### API Reference: * [DallEAPIWrapper](https://api.js.langchain.com/classes/langchain_openai.DallEAPIWrapper.html) from `@langchain/openai` * * * #### Help us out by providing feedback on this documentation page: [ Previous Connery Action Tool ](/v0.1/docs/integrations/tools/connery/)[ Next Discord Tool ](/v0.1/docs/integrations/tools/discord/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/discord/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Discord Tool Discord Tool ============ The Discord Tool gives your agent the ability to search, read, and write messages to discord channels. It is useful for when you need to interact with a discord channel. Setup[​](#setup "Direct link to Setup") --------------------------------------- To use the Discord Tool you need to install the following official peer depencency: * npm * Yarn * pnpm npm install discord.js yarn add discord.js pnpm add discord.js Usage, standalone[​](#usage-standalone "Direct link to Usage, standalone") -------------------------------------------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { DiscordGetMessagesTool, DiscordChannelSearchTool, DiscordSendMessagesTool, DiscordGetGuildsTool, DiscordGetTextChannelsTool,} from "@langchain/community/tools/discord";// Get messages from a channel given channel IDconst getMessageTool = new DiscordGetMessagesTool();const messageResults = await getMessageTool.invoke("1153400523718938780");console.log(messageResults);// Get guilds/serversconst getGuildsTool = new DiscordGetGuildsTool();const guildResults = await getGuildsTool.invoke("");console.log(guildResults);// Search results in a given channel (case-insensitive)const searchTool = new DiscordChannelSearchTool();const searchResults = await searchTool.invoke("Test");console.log(searchResults);// Get all text channels of a serverconst getChannelsTool = new DiscordGetTextChannelsTool();const channelResults = await getChannelsTool.invoke("1153400523718938775");console.log(channelResults);// Send a messageconst sendMessageTool = new DiscordSendMessagesTool();const sendMessageResults = await sendMessageTool.invoke("test message");console.log(sendMessageResults); #### API Reference: * [DiscordGetMessagesTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetMessagesTool.html) from `@langchain/community/tools/discord` * [DiscordChannelSearchTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordChannelSearchTool.html) from `@langchain/community/tools/discord` * [DiscordSendMessagesTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordSendMessagesTool.html) from `@langchain/community/tools/discord` * [DiscordGetGuildsTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetGuildsTool.html) from `@langchain/community/tools/discord` * [DiscordGetTextChannelsTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordGetTextChannelsTool.html) from `@langchain/community/tools/discord` Usage, in an Agent[​](#usage-in-an-agent "Direct link to Usage, in an Agent") ----------------------------------------------------------------------------- import { ChatOpenAI } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";import { DiscordSendMessagesTool } from "@langchain/community/tools/discord";import { DadJokeAPI } from "@langchain/community/tools/dadjokeapi";const model = new ChatOpenAI({ temperature: 0,});const tools = [new DiscordSendMessagesTool(), new DadJokeAPI()];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const res = await executor.invoke({ input: `Tell a joke in the discord channel`,});console.log(res.output);// "What's the best thing about elevator jokes? They work on so many levels." #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [DiscordSendMessagesTool](https://api.js.langchain.com/classes/langchain_community_tools_discord.DiscordSendMessagesTool.html) from `@langchain/community/tools/discord` * [DadJokeAPI](https://api.js.langchain.com/classes/langchain_community_tools_dadjokeapi.DadJokeAPI.html) from `@langchain/community/tools/dadjokeapi` * * * #### Help us out by providing feedback on this documentation page: [ Previous Dall-E Tool ](/v0.1/docs/integrations/tools/dalle/)[ Next DuckDuckGoSearch ](/v0.1/docs/integrations/tools/duckduckgo_search/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/duckduckgo_search/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * DuckDuckGoSearch DuckDuckGoSearch ================ DuckDuckGoSearch offers a privacy-focused search API designed for LLM Agents. It provides seamless integration with a wide range of data sources, prioritizing user privacy and relevant search results. Setup[​](#setup "Direct link to Setup") --------------------------------------- Install the `@langchain/community` package, along with the `duck-duck-scrape` dependency: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/community yarn add @langchain/community pnpm add @langchain/community * npm * Yarn * pnpm npm install duck-duck-scrape yarn add duck-duck-scrape pnpm add duck-duck-scrape Usage[​](#usage "Direct link to Usage") --------------------------------------- You can call `.invoke` on `DuckDuckGoSearch` to search for a query: import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search";// Instantiate the DuckDuckGoSearch tool.const tool = new DuckDuckGoSearch({ maxResults: 1 });// Get the results of a query by calling .invoke on the tool.const result = await tool.invoke( "What is Anthropic's estimated revenue for 2024?");console.log(result);/*[{ "title": "Anthropic forecasts more than $850 mln in annualized revenue rate by ...", "link": "https://www.reuters.com/technology/anthropic-forecasts-more-than-850-mln-annualized-revenue-rate-by-2024-end-report-2023-12-26/", "snippet": "Dec 26 (Reuters) - Artificial intelligence startup <b>Anthropic</b> has projected it will generate more than $850 million in annualized <b>revenue</b> by the end of <b>2024</b>, the Information reported on Tuesday ..."}]*/ #### API Reference: * [DuckDuckGoSearch](https://api.js.langchain.com/classes/langchain_community_tools_duckduckgo_search.DuckDuckGoSearch.html) from `@langchain/community/tools/duckduckgo_search` tip See the LangSmith trace [here](https://smith.langchain.com/public/c352faaf-e617-4779-a943-96f963dc19a5/r) ### With an agent[​](#with-an-agent "Direct link to With an agent") import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search";import { ChatOpenAI } from "@langchain/openai";import type { ChatPromptTemplate } from "@langchain/core/prompts";import { pull } from "langchain/hub";import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";// Define the tools the agent will have access to.const tools = [new DuckDuckGoSearch({ maxResults: 1 })];// Get the prompt to use - you can modify this!// If you want to see the prompt in full, you can at:// https://smith.langchain.com/hub/hwchase17/openai-functions-agentconst prompt = await pull<ChatPromptTemplate>( "hwchase17/openai-functions-agent");const llm = new ChatOpenAI({ model: "gpt-4-turbo-preview", temperature: 0,});const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt,});const agentExecutor = new AgentExecutor({ agent, tools,});const result = await agentExecutor.invoke({ input: "What is Anthropic's estimated revenue for 2024?",});console.log(result);/*{ input: "What is Anthropic's estimated revenue for 2024?", output: 'Anthropic has projected that it will generate more than $850 million in annualized revenue by the end of 2024. For more details, you can refer to the [Reuters article](https://www.reuters.com/technology/anthropic-forecasts-more-than-850-mln-annualized-revenue-rate-by-2024-end-report-2023-12-26/).'}*/ #### API Reference: * [DuckDuckGoSearch](https://api.js.langchain.com/classes/langchain_community_tools_duckduckgo_search.DuckDuckGoSearch.html) from `@langchain/community/tools/duckduckgo_search` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [pull](https://api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub` * [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [createOpenAIFunctionsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents` tip See the LangSmith trace for the Agent example [here](https://smith.langchain.com/public/48f84a32-4fb5-4863-a8cd-324abebfce91/r) * * * #### Help us out by providing feedback on this documentation page: [ Previous Discord Tool ](/v0.1/docs/integrations/tools/discord/)[ Next Exa Search ](/v0.1/docs/integrations/tools/exa_search/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/exa_search/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Exa Search Exa Search ========== Exa (formerly Metaphor Search) is a search engine fully designed for use by LLMs. Search for documents on the internet using natural language queries, then retrieve cleaned HTML content from desired documents. Unlike keyword-based search (Google), Exa's neural search capabilities allow it to semantically understand queries and return relevant documents. For example, we could search `"fascinating article about cats"` and compare the search results from Google and Exa. Google gives us SEO-optimized listicles based on the keyword “fascinating”. Exa just works. This notebook goes over how to use Exa Search with LangChain. First, get an Exa API key and add it as an environment variable. Get 1000 free searches/month by [signing up here.](https://dashboard.exa.ai/login) Usage[​](#usage "Direct link to Usage") --------------------------------------- First, install the LangChain integration package for Exa: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/exa @langchain/openai langchain yarn add @langchain/exa @langchain/openai langchain pnpm add @langchain/exa @langchain/openai langchain You'll need to set your API key as an environment variable. The `Exa` class defaults to `EXASEARCH_API_KEY` when searching for your API key. Usage[​](#usage-1 "Direct link to Usage") ----------------------------------------- import { ExaSearchResults } from "@langchain/exa";import { ChatOpenAI } from "@langchain/openai";import type { ChatPromptTemplate } from "@langchain/core/prompts";import Exa from "exa-js";import { pull } from "langchain/hub";import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";// Define the tools the agent will have access to.const tools = [ new ExaSearchResults({ // @ts-expect-error Some TS Config's will cause this to give a TypeScript error, even though it works. client: new Exa(process.env.EXASEARCH_API_KEY), }),];// Get the prompt to use - you can modify this!// If you want to see the prompt in full, you can at:// https://smith.langchain.com/hub/hwchase17/openai-functions-agentconst prompt = await pull<ChatPromptTemplate>( "hwchase17/openai-functions-agent");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-1106", temperature: 0,});const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt,});const agentExecutor = new AgentExecutor({ agent, tools,});const result = await agentExecutor.invoke({ input: "what is the weather in wailea?",});console.log(result);/*{ input: 'what is the weather in wailea?', output: 'I found a weather forecast for Wailea-Makena on Windfinder.com. You can check the forecast [here](https://www.windfinder.com/forecast/wailea-makena).'}*/ #### API Reference: * [ExaSearchResults](https://api.js.langchain.com/classes/langchain_exa.ExaSearchResults.html) from `@langchain/exa` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [pull](https://api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub` * [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [createOpenAIFunctionsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents` tip You can see a LangSmith trace for this example [here](https://smith.langchain.com/public/775ea9a8-d54c-405c-9126-a012405d9099/r). Using the Exa SDK as LangChain Agent Tools[​](#using-the-exa-sdk-as-langchain-agent-tools "Direct link to Using the Exa SDK as LangChain Agent Tools") ------------------------------------------------------------------------------------------------------------------------------------------------------ We can create LangChain tools which use the [`ExaRetriever`](/v0.1/docs/integrations/retrievers/exa/) and the [`createRetrieverTool`](https://api.js.langchain.com/functions/langchain_tools_retriever.createRetrieverTool.html) Using these tools we can construct a simple search agent that can answer questions about any topic. import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { ChatOpenAI } from "@langchain/openai";import Exa from "exa-js";import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";import { createRetrieverTool } from "langchain/tools/retriever";import { ExaRetriever } from "@langchain/exa";// @ts-expect-error Some TS Config's will cause this to give a TypeScript error, even though it works.const client: Exa.default = new Exa(process.env.EXASEARCH_API_KEY);const exaRetriever = new ExaRetriever({ client, searchArgs: { numResults: 2, },});// Convert the ExaRetriever into a toolconst searchTool = createRetrieverTool(exaRetriever, { name: "search", description: "Get the contents of a webpage given a string search query.",});const tools = [searchTool];const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });const prompt = ChatPromptTemplate.fromMessages([ [ "system", `You are a web researcher who answers user questions by looking up information on the internet and retrieving contents of helpful documents. Cite your sources.`, ], ["human", "{input}"], new MessagesPlaceholder("agent_scratchpad"),]);const agentExecutor = new AgentExecutor({ agent: await createOpenAIFunctionsAgent({ llm, tools, prompt, }), tools,});console.log( await agentExecutor.invoke({ input: "Summarize for me a fascinating article about cats.", })); #### API Reference: * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [createOpenAIFunctionsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents` * [createRetrieverTool](https://api.js.langchain.com/functions/langchain_tools_retriever.createRetrieverTool.html) from `langchain/tools/retriever` * [ExaRetriever](https://api.js.langchain.com/classes/langchain_exa.ExaRetriever.html) from `@langchain/exa` { input: 'Summarize for me a fascinating article about cats.', output: 'The article discusses the research of biologist Jaroslav Flegr, who has been investigating the effects of a single-celled parasite called Toxoplasma gondii (T. gondii or Toxo), which is excreted by cats in their feces. Flegr began to suspect in the early 1990s that this parasite was subtly manipulating his personality, causing him to behave in strange, often self-destructive ways. He reasoned that if it was affecting him, it was probably doing the same to others.\n' + '\n' + "T. gondii is the microbe that causes toxoplasmosis, a disease that can be transmitted from a pregnant woman to her fetus, potentially resulting in severe brain damage or death. It's also a major threat to people with weakened immunity. However, healthy children and adults usually experience nothing worse than brief flu-like symptoms before quickly fighting off the protozoan, which then lies dormant inside brain cells.\n" + '\n' + "Flegr's research is unconventional and suggests that these tiny organisms carried by house cats could be creeping into our brains, causing everything from car wrecks to schizophrenia.\n" + '\n' + '(Source: [The Atlantic](https://www.theatlantic.com/magazine/archive/2012/03/how-your-cat-is-making-you-crazy/308873/))'} tip You can see a LangSmith trace for this example [here](https://smith.langchain.com/public/d123ba5f-8535-4669-9e43-ac7ab3c6735e/r). * * * #### Help us out by providing feedback on this documentation page: [ Previous DuckDuckGoSearch ](/v0.1/docs/integrations/tools/duckduckgo_search/)[ Next Gmail Tool ](/v0.1/docs/integrations/tools/gmail/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/gmail/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Gmail Tool Gmail Tool ========== The Gmail Tool allows your agent to create and view messages from a linked email account. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to get an API key from [Google here](https://developers.google.com/gmail/api/guides) and [enable the new Gmail API](https://console.cloud.google.com/apis/library/gmail.googleapis.com). Then, set the environment variables for `GMAIL_CLIENT_EMAIL`, and either `GMAIL_PRIVATE_KEY`, or `GMAIL_KEYFILE`. To use the Gmail Tool you need to install the following official peer dependency: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai googleapis @langchain/community yarn add @langchain/openai googleapis @langchain/community pnpm add @langchain/openai googleapis @langchain/community Usage[​](#usage "Direct link to Usage") --------------------------------------- import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "@langchain/openai";import { GmailCreateDraft, GmailGetMessage, GmailGetThread, GmailSearch, GmailSendMessage,} from "@langchain/community/tools/gmail";import { StructuredTool } from "@langchain/core/tools";export async function run() { const model = new OpenAI({ temperature: 0, apiKey: process.env.OPENAI_API_KEY, }); // These are the default parameters for the Gmail tools // const gmailParams = { // credentials: { // clientEmail: process.env.GMAIL_CLIENT_EMAIL, // privateKey: process.env.GMAIL_PRIVATE_KEY, // }, // scopes: ["https://mail.google.com/"], // }; // For custom parameters, uncomment the code above, replace the values with your own, and pass it to the tools below const tools: StructuredTool[] = [ new GmailCreateDraft(), new GmailGetMessage(), new GmailGetThread(), new GmailSearch(), new GmailSendMessage(), ]; const gmailAgent = await initializeAgentExecutorWithOptions(tools, model, { agentType: "structured-chat-zero-shot-react-description", verbose: true, }); const createInput = `Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot who is looking to collaborate on some research with her estranged friend, a cat. Under no circumstances may you send the message, however.`; const createResult = await gmailAgent.invoke({ input: createInput }); // Create Result { // output: 'I have created a draft email for you to edit. The draft Id is r5681294731961864018.' // } console.log("Create Result", createResult); const viewInput = `Could you search in my drafts for the latest email?`; const viewResult = await gmailAgent.invoke({ input: viewInput }); // View Result { // output: "The latest email in your drafts is from hopefulparrot@gmail.com with the subject 'Collaboration Opportunity'. The body of the email reads: 'Dear [Friend], I hope this letter finds you well. I am writing to you in the hopes of rekindling our friendship and to discuss the possibility of collaborating on some research together. I know that we have had our differences in the past, but I believe that we can put them aside and work together for the greater good. I look forward to hearing from you. Sincerely, [Parrot]'" // } console.log("View Result", viewResult);} #### API Reference: * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [GmailCreateDraft](https://api.js.langchain.com/classes/langchain_community_tools_gmail.GmailCreateDraft.html) from `@langchain/community/tools/gmail` * [GmailGetMessage](https://api.js.langchain.com/classes/langchain_community_tools_gmail.GmailGetMessage.html) from `@langchain/community/tools/gmail` * [GmailGetThread](https://api.js.langchain.com/classes/langchain_community_tools_gmail.GmailGetThread.html) from `@langchain/community/tools/gmail` * [GmailSearch](https://api.js.langchain.com/classes/langchain_community_tools_gmail.GmailSearch.html) from `@langchain/community/tools/gmail` * [GmailSendMessage](https://api.js.langchain.com/classes/langchain_community_tools_gmail.GmailSendMessage.html) from `@langchain/community/tools/gmail` * [StructuredTool](https://api.js.langchain.com/classes/langchain_core_tools.StructuredTool.html) from `@langchain/core/tools` * * * #### Help us out by providing feedback on this documentation page: [ Previous Exa Search ](/v0.1/docs/integrations/tools/exa_search/)[ Next Google Calendar Tool ](/v0.1/docs/integrations/tools/google_calendar/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/lambda_agent/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Agent with AWS Lambda Agent with AWS Lambda Integration ================================= Full docs here: [https://docs.aws.amazon.com/lambda/index.html](https://docs.aws.amazon.com/lambda/index.html) **AWS Lambda** is a serverless computing service provided by Amazon Web Services (AWS), designed to allow developers to build and run applications and services without the need for provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. By including a AWSLambda in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in your AWS Cloud for whatever purposes you need. When an Agent uses the AWSLambda tool, it will provide an argument of type `string` which will in turn be passed into the Lambda function via the `event` parameter. This quick start will demonstrate how an Agent could use a Lambda function to send an email via [Amazon Simple Email Service](https://aws.amazon.com/ses/). The lambda code which sends the email is not provided, but if you'd like to learn how this could be done, see [here](https://repost.aws/knowledge-center/lambda-send-email-ses). Keep in mind this is an intentionally simple example; Lambda can used to execute code for a near infinite number of other purposes (including executing more Langchains)! ### Note about credentials:[​](#note-about-credentials "Direct link to Note about credentials:") * If you have not run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via the AWS CLI, the `region`, `accessKeyId`, and `secretAccessKey` must be provided to the AWSLambda constructor. * The IAM role corresponding to those credentials must have permission to invoke the lambda function. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";import { SerpAPI } from "langchain/tools";import { AWSLambda } from "langchain/tools/aws_lambda";import { initializeAgentExecutorWithOptions } from "langchain/agents";const model = new OpenAI({ temperature: 0 });const emailSenderTool = new AWSLambda({ name: "email-sender", // tell the Agent precisely what the tool does description: "Sends an email with the specified content to testing123@gmail.com", region: "us-east-1", // optional: AWS region in which the function is deployed accessKeyId: "abc123", // optional: access key id for a IAM user with invoke permissions secretAccessKey: "xyz456", // optional: secret access key for that IAM user functionName: "SendEmailViaSES", // the function name as seen in AWS Console});const tools = [emailSenderTool, new SerpAPI("api_key_goes_here")];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description",});const input = `Find out the capital of Croatia. Once you have it, email the answer to testing123@gmail.com.`;const result = await executor.invoke({ input });console.log(result); * * * #### Help us out by providing feedback on this documentation page: [ Previous Google Places Tool ](/v0.1/docs/integrations/tools/google_places/)[ Next Python interpreter tool ](/v0.1/docs/integrations/tools/pyinterpreter/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/google_places/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Google Places Tool Google Places Tool ================== The Google Places Tool allows your agent to utilize the Google Places API in order to find addresses, phone numbers, website, etc. from text about a location listed on Google Places. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to get an API key from [Google here](https://developers.google.com/maps/documentation/places/web-service/overview) and [enable the new Places API](https://console.cloud.google.com/apis/library/places.googleapis.com). Then, set your API key as `process.env.GOOGLE_PLACES_API_KEY` or pass it in as an `apiKey` constructor argument. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { GooglePlacesAPI } from "@langchain/community/tools/google_places";import { OpenAI } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";export async function run() { const model = new OpenAI({ temperature: 0, }); const tools = [new GooglePlacesAPI()]; const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true, }); const res = await executor.invoke({ input: "Where is the University of Toronto - Scarborough? ", }); console.log(res.output);} #### API Reference: * [GooglePlacesAPI](https://api.js.langchain.com/classes/langchain_community_tools_google_places.GooglePlacesAPI.html) from `@langchain/community/tools/google_places` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * * * #### Help us out by providing feedback on this documentation page: [ Previous Google Calendar Tool ](/v0.1/docs/integrations/tools/google_calendar/)[ Next Agent with AWS Lambda ](/v0.1/docs/integrations/tools/lambda_agent/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/google_calendar/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Google Calendar Tool Google Calendar Tool ==================== The Google Calendar Tools allow your agent to create and view Google Calendar events from a linked calendar. Setup[​](#setup "Direct link to Setup") --------------------------------------- To use the Google Calendar Tools you need to install the following official peer dependency: * npm * Yarn * pnpm npm install googleapis yarn add googleapis pnpm add googleapis Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "@langchain/openai";import { Calculator } from "@langchain/community/tools/calculator";import { GoogleCalendarCreateTool, GoogleCalendarViewTool,} from "@langchain/community/tools/google_calendar";export async function run() { const model = new OpenAI({ temperature: 0, apiKey: process.env.OPENAI_API_KEY, }); const googleCalendarParams = { credentials: { clientEmail: process.env.GOOGLE_CALENDAR_CLIENT_EMAIL, privateKey: process.env.GOOGLE_CALENDAR_PRIVATE_KEY, calendarId: process.env.GOOGLE_CALENDAR_CALENDAR_ID, }, scopes: [ "https://www.googleapis.com/auth/calendar", "https://www.googleapis.com/auth/calendar.events", ], model, }; const tools = [ new Calculator(), new GoogleCalendarCreateTool(googleCalendarParams), new GoogleCalendarViewTool(googleCalendarParams), ]; const calendarAgent = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true, }); const createInput = `Create a meeting with John Doe next Friday at 4pm - adding to the agenda of it the result of 99 + 99`; const createResult = await calendarAgent.invoke({ input: createInput }); // Create Result { // output: 'A meeting with John Doe on 29th September at 4pm has been created and the result of 99 + 99 has been added to the agenda.' // } console.log("Create Result", createResult); const viewInput = `What meetings do I have this week?`; const viewResult = await calendarAgent.invoke({ input: viewInput }); // View Result { // output: "You have no meetings this week between 8am and 8pm." // } console.log("View Result", viewResult);} #### API Reference: * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator` * [GoogleCalendarCreateTool](https://api.js.langchain.com/classes/langchain_community_tools_google_calendar.GoogleCalendarCreateTool.html) from `@langchain/community/tools/google_calendar` * [GoogleCalendarViewTool](https://api.js.langchain.com/classes/langchain_community_tools_google_calendar.GoogleCalendarViewTool.html) from `@langchain/community/tools/google_calendar` * * * #### Help us out by providing feedback on this documentation page: [ Previous Gmail Tool ](/v0.1/docs/integrations/tools/gmail/)[ Next Google Places Tool ](/v0.1/docs/integrations/tools/google_places/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/pyinterpreter/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Python interpreter tool Python interpreter tool ======================= danger This tool executes code and can potentially perform destructive actions. Be careful that you trust any code passed to it! LangChain offers an experimental tool for executing arbitrary Python code. This can be useful in combination with an LLM that can generate code to perform more powerful computations. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";import { PythonInterpreterTool } from "langchain/experimental/tools/pyinterpreter";import { ChatPromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";const prompt = ChatPromptTemplate.fromTemplate( `Generate python code that does {input}. Do not generate anything else.`);const model = new OpenAI({});const interpreter = await PythonInterpreterTool.initialize({ indexURL: "../node_modules/pyodide",});// Note: In Deno, it may be easier to initialize the interpreter yourself:// import pyodideModule from "npm:pyodide/pyodide.js";// import { PythonInterpreterTool } from "npm:langchain/experimental/tools/pyinterpreter";// const pyodide = await pyodideModule.loadPyodide();// const pythonTool = new PythonInterpreterTool({instance: pyodide})const chain = prompt .pipe(model) .pipe(new StringOutputParser()) .pipe(interpreter);const result = await chain.invoke({ input: `prints "Hello LangChain"`,});console.log(JSON.parse(result).stdout);// To install python packages:// This uses the loadPackages command.// This works for packages built with pyodide.await interpreter.addPackage("numpy");// But for other packages, you will want to use micropip.// See: https://pyodide.org/en/stable/usage/loading-packages.html// for more informationawait interpreter.addPackage("micropip");// The following is roughly equivalent to:// pyodide.runPython(`import ${pkgname}; ${pkgname}`);const micropip = interpreter.pyodideInstance.pyimport("micropip");await micropip.install("numpy"); #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [PythonInterpreterTool](https://api.js.langchain.com/classes/langchain_experimental_tools_pyinterpreter.PythonInterpreterTool.html) from `langchain/experimental/tools/pyinterpreter` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * * * #### Help us out by providing feedback on this documentation page: [ Previous Agent with AWS Lambda ](/v0.1/docs/integrations/tools/lambda_agent/)[ Next SearchApi tool ](/v0.1/docs/integrations/tools/searchapi/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/searchapi/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * SearchApi tool SearchApi tool ============== The `SearchApi` tool connects your agents and chains to the internet. A wrapper around the Search API. This tool is handy when you need to answer questions about current events. Usage[​](#usage "Direct link to Usage") --------------------------------------- Input should be a search query. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { AgentExecutor } from "langchain/agents";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence } from "@langchain/core/runnables";import { AgentFinish, AgentAction } from "@langchain/core/agents";import { BaseMessageChunk } from "@langchain/core/messages";import { SearchApi } from "@langchain/community/tools/searchapi";const model = new ChatOpenAI({ temperature: 0,});const tools = [ new SearchApi(process.env.SEARCHAPI_API_KEY, { engine: "google_news", }),];const prefix = ChatPromptTemplate.fromMessages([ [ "ai", "Answer the following questions as best you can. In your final answer, use a bulleted list markdown format.", ], ["human", "{input}"],]);// Replace this with your actual output parser.const customOutputParser = ( input: BaseMessageChunk): AgentAction | AgentFinish => ({ log: "test", returnValues: { output: input, },});// Replace this placeholder agent with your actual implementation.const agent = RunnableSequence.from([prefix, model, customOutputParser]);const executor = AgentExecutor.fromAgentAndTools({ agent, tools,});const res = await executor.invoke({ input: "What's happening in Ukraine today?",});console.log(res); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [AgentFinish](https://api.js.langchain.com/types/langchain_core_agents.AgentFinish.html) from `@langchain/core/agents` * [AgentAction](https://api.js.langchain.com/types/langchain_core_agents.AgentAction.html) from `@langchain/core/agents` * [BaseMessageChunk](https://api.js.langchain.com/classes/langchain_core_messages.BaseMessageChunk.html) from `@langchain/core/messages` * [SearchApi](https://api.js.langchain.com/classes/langchain_community_tools_searchapi.SearchApi.html) from `@langchain/community/tools/searchapi` * * * #### Help us out by providing feedback on this documentation page: [ Previous Python interpreter tool ](/v0.1/docs/integrations/tools/pyinterpreter/)[ Next Searxng Search tool ](/v0.1/docs/integrations/tools/searxng/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/searxng/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Searxng Search tool Searxng Search tool =================== The `SearxngSearch` tool connects your agents and chains to the internet. A wrapper around the SearxNG API, this tool is useful for performing meta-search engine queries using the SearxNG API. It is particularly helpful in answering questions about current events. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { AgentExecutor } from "langchain/agents";import { BaseMessageChunk } from "@langchain/core/messages";import { AgentAction, AgentFinish } from "@langchain/core/agents";import { RunnableSequence } from "@langchain/core/runnables";import { ChatPromptTemplate } from "@langchain/core/prompts";import { SearxngSearch } from "@langchain/community/tools/searxng_search";const model = new ChatOpenAI({ maxTokens: 1000, model: "gpt-4",});// `apiBase` will be automatically parsed from .env file, set "SEARXNG_API_BASE" in .env,const tools = [ new SearxngSearch({ params: { format: "json", // Do not change this, format other than "json" is will throw error engines: "google", }, // Custom Headers to support rapidAPI authentication Or any instance that requires custom headers headers: {}, }),];const prefix = ChatPromptTemplate.fromMessages([ [ "ai", "Answer the following questions as best you can. In your final answer, use a bulleted list markdown format.", ], ["human", "{input}"],]);// Replace this with your actual output parser.const customOutputParser = ( input: BaseMessageChunk): AgentAction | AgentFinish => ({ log: "test", returnValues: { output: input, },});// Replace this placeholder agent with your actual implementation.const agent = RunnableSequence.from([prefix, model, customOutputParser]);const executor = AgentExecutor.fromAgentAndTools({ agent, tools,});console.log("Loaded agent.");const input = `What is Langchain? Describe in 50 words`;console.log(`Executing with input "${input}"...`);const result = await executor.invoke({ input });console.log(result);/** * Langchain is a framework for developing applications powered by language models, such as chatbots, Generative Question-Answering, summarization, and more. It provides a standard interface, integrations with other tools, and end-to-end chains for common applications. Langchain enables data-aware and powerful applications. */ #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents` * [BaseMessageChunk](https://api.js.langchain.com/classes/langchain_core_messages.BaseMessageChunk.html) from `@langchain/core/messages` * [AgentAction](https://api.js.langchain.com/types/langchain_core_agents.AgentAction.html) from `@langchain/core/agents` * [AgentFinish](https://api.js.langchain.com/types/langchain_core_agents.AgentFinish.html) from `@langchain/core/agents` * [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [SearxngSearch](https://api.js.langchain.com/classes/langchain_community_tools_searxng_search.SearxngSearch.html) from `@langchain/community/tools/searxng_search` * * * #### Help us out by providing feedback on this documentation page: [ Previous SearchApi tool ](/v0.1/docs/integrations/tools/searchapi/)[ Next StackExchange Tool ](/v0.1/docs/integrations/tools/stackexchange/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/stackexchange/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * StackExchange Tool StackExchange Tool ================== The StackExchange tool connects your agents and chains to StackExchange's API. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { StackExchangeAPI } from "@langchain/community/tools/stackexchange";// Get results from StackExchange APIconst stackExchangeTool = new StackExchangeAPI();const result = await stackExchangeTool.invoke("zsh: command not found: python");console.log(result);// Get results from StackExchange API with title queryconst stackExchangeTitleTool = new StackExchangeAPI({ queryType: "title",});const titleResult = await stackExchangeTitleTool.invoke( "zsh: command not found: python");console.log(titleResult);// Get results from StackExchange API with bad queryconst stackExchangeBadTool = new StackExchangeAPI();const badResult = await stackExchangeBadTool.invoke( "sjefbsmnazdkhbazkbdoaencopebfoubaef");console.log(badResult); #### API Reference: * [StackExchangeAPI](https://api.js.langchain.com/classes/langchain_community_tools_stackexchange.StackExchangeAPI.html) from `@langchain/community/tools/stackexchange` * * * #### Help us out by providing feedback on this documentation page: [ Previous Searxng Search tool ](/v0.1/docs/integrations/tools/searxng/)[ Next Tavily Search ](/v0.1/docs/integrations/tools/tavily_search/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/webbrowser/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Web Browser Tool Web Browser Tool ================ The Webbrowser Tool gives your agent the ability to visit a website and extract information. It is described to the agent as useful for when you need to find something on or summarize a webpage. input should be a comma separated list of "valid URL including protocol","what you want to find on the page or empty string for a summary". It exposes two modes of operation: * when called by the Agent with only a URL it produces a summary of the website contents * when called by the Agent with a URL and a description of what to find it will instead use an in-memory Vector Store to find the most relevant snippets and summarise those Setup[​](#setup "Direct link to Setup") --------------------------------------- To use the Webbrowser Tool you need to install the dependencies: * npm * Yarn * pnpm npm install cheerio axios yarn add cheerio axios pnpm add cheerio axios Usage, standalone[​](#usage-standalone "Direct link to Usage, standalone") -------------------------------------------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { WebBrowser } from "langchain/tools/webbrowser";import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";export async function run() { // this will not work with Azure OpenAI API yet // Azure OpenAI API does not support embedding with multiple inputs yet // Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions. // So we will fail fast, when Azure OpenAI API is used if (process.env.AZURE_OPENAI_API_KEY) { throw new Error( "Azure OpenAI API does not support embedding with multiple inputs yet" ); } const model = new ChatOpenAI({ temperature: 0 }); const embeddings = new OpenAIEmbeddings( process.env.AZURE_OPENAI_API_KEY ? { azureOpenAIApiDeploymentName: "Embeddings2" } : {} ); const browser = new WebBrowser({ model, embeddings }); const result = await browser.invoke( `"https://www.themarginalian.org/2015/04/09/find-your-bliss-joseph-campbell-power-of-myth","who is joseph campbell"` ); console.log(result); /* Joseph Campbell was a mythologist and writer who discussed spirituality, psychological archetypes, cultural myths, and the mythology of self. He sat down with Bill Moyers for a lengthy conversation at George Lucas’s Skywalker Ranch in California, which continued the following year at the American Museum of Natural History in New York. The resulting 24 hours of raw footage were edited down to six one-hour episodes and broadcast on PBS in 1988, shortly after Campbell’s death, in what became one of the most popular in the history of public television. Relevant Links: - [The Holstee Manifesto](http://holstee.com/manifesto-bp) - [The Silent Music of the Mind: Remembering Oliver Sacks](https://www.themarginalian.org/2015/08/31/remembering-oliver-sacks) - [Joseph Campbell series](http://billmoyers.com/spotlight/download-joseph-campbell-and-the-power-of-myth-audio/) - [Bill Moyers](https://www.themarginalian.org/tag/bill-moyers/) - [books](https://www.themarginalian.org/tag/books/) */} #### API Reference: * [WebBrowser](https://api.js.langchain.com/classes/langchain_tools_webbrowser.WebBrowser.html) from `langchain/tools/webbrowser` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` Usage, in an Agent[​](#usage-in-an-agent "Direct link to Usage, in an Agent") ----------------------------------------------------------------------------- import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";import { Calculator } from "@langchain/community/tools/calculator";import { WebBrowser } from "langchain/tools/webbrowser";import { SerpAPI } from "@langchain/community/tools/serpapi";export const run = async () => { const model = new OpenAI({ temperature: 0 }); const embeddings = new OpenAIEmbeddings(); const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(), new WebBrowser({ model, embeddings }), ]; const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true, }); console.log("Loaded agent."); const input = `What is the word of the day on merriam webster. What is the top result on google for that word`; console.log(`Executing with input "${input}"...`); const result = await executor.invoke({ input }); /* Entering new agent_executor chain... I need to find the word of the day on Merriam Webster and then search for it on Google Action: web-browser Action Input: "https://www.merriam-webster.com/word-of-the-day", "" Summary: Merriam-Webster is a website that provides users with a variety of resources, including a dictionary, thesaurus, word finder, word of the day, games and quizzes, and more. The website also allows users to log in and save words, view recents, and access their account settings. The Word of the Day for April 14, 2023 is "lackadaisical", which means lacking in life, spirit, or zest. The website also provides quizzes and games to help users build their vocabulary. Relevant Links: - [Test Your Vocabulary](https://www.merriam-webster.com/games) - [Thesaurus](https://www.merriam-webster.com/thesaurus) - [Word Finder](https://www.merriam-webster.com/wordfinder) - [Word of the Day](https://www.merriam-webster.com/word-of-the-day) - [Shop](https://shop.merriam-webster.com/?utm_source=mwsite&utm_medium=nav&utm_content= I now need to search for the word of the day on Google Action: search Action Input: "lackadaisical" lackadaisical implies a carefree indifference marked by half-hearted efforts. lackadaisical college seniors pretending to study. listless suggests a lack of ... Finished chain. */ console.log(`Got output ${JSON.stringify(result, null, 2)}`); /* Got output { "output": "The word of the day on Merriam Webster is \"lackadaisical\", which implies a carefree indifference marked by half-hearted efforts.", "intermediateSteps": [ { "action": { "tool": "web-browser", "toolInput": "https://www.merriam-webster.com/word-of-the-day\", ", "log": " I need to find the word of the day on Merriam Webster and then search for it on Google\nAction: web-browser\nAction Input: \"https://www.merriam-webster.com/word-of-the-day\", \"\"" }, "observation": "\n\nSummary: Merriam-Webster is a website that provides users with a variety of resources, including a dictionary, thesaurus, word finder, word of the day, games and quizzes, and more. The website also allows users to log in and save words, view recents, and access their account settings. The Word of the Day for April 14, 2023 is \"lackadaisical\", which means lacking in life, spirit, or zest. The website also provides quizzes and games to help users build their vocabulary.\n\nRelevant Links: \n- [Test Your Vocabulary](https://www.merriam-webster.com/games)\n- [Thesaurus](https://www.merriam-webster.com/thesaurus)\n- [Word Finder](https://www.merriam-webster.com/wordfinder)\n- [Word of the Day](https://www.merriam-webster.com/word-of-the-day)\n- [Shop](https://shop.merriam-webster.com/?utm_source=mwsite&utm_medium=nav&utm_content=" }, { "action": { "tool": "search", "toolInput": "lackadaisical", "log": " I now need to search for the word of the day on Google\nAction: search\nAction Input: \"lackadaisical\"" }, "observation": "lackadaisical implies a carefree indifference marked by half-hearted efforts. lackadaisical college seniors pretending to study. listless suggests a lack of ..." } ] } */}; #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator` * [WebBrowser](https://api.js.langchain.com/classes/langchain_tools_webbrowser.WebBrowser.html) from `langchain/tools/webbrowser` * [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi` * * * #### Help us out by providing feedback on this documentation page: [ Previous Tavily Search ](/v0.1/docs/integrations/tools/tavily_search/)[ Next Wikipedia tool ](/v0.1/docs/integrations/tools/wikipedia/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/wolframalpha/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * WolframAlpha Tool WolframAlpha Tool ================= The WolframAlpha tool connects your agents and chains to WolframAlpha's state-of-the-art computational intelligence engine. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to create an app from the [WolframAlpha portal](https://developer.wolframalpha.com/) and obtain an `appid`. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { WolframAlphaTool } from "@langchain/community/tools/wolframalpha";const tool = new WolframAlphaTool({ appid: "YOUR_APP_ID",});const res = await tool.invoke("What is 2 * 2?");console.log(res); #### API Reference: * [WolframAlphaTool](https://api.js.langchain.com/classes/langchain_community_tools_wolframalpha.WolframAlphaTool.html) from `@langchain/community/tools/wolframalpha` * * * #### Help us out by providing feedback on this documentation page: [ Previous Wikipedia tool ](/v0.1/docs/integrations/tools/wikipedia/)[ Next Agent with Zapier NLA Integration ](/v0.1/docs/integrations/tools/zapier_agent/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/wikipedia/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Wikipedia tool Wikipedia tool ============== The `WikipediaQueryRun` tool connects your agents and chains to Wikipedia. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";const tool = new WikipediaQueryRun({ topKResults: 3, maxDocContentLength: 4000,});const res = await tool.invoke("Langchain");console.log(res); #### API Reference: * [WikipediaQueryRun](https://api.js.langchain.com/classes/langchain_community_tools_wikipedia_query_run.WikipediaQueryRun.html) from `@langchain/community/tools/wikipedia_query_run` * * * #### Help us out by providing feedback on this documentation page: [ Previous Web Browser Tool ](/v0.1/docs/integrations/tools/webbrowser/)[ Next WolframAlpha Tool ](/v0.1/docs/integrations/tools/wolframalpha/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/tools/zapier_agent/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [ChatGPT Plugins](/v0.1/docs/integrations/tools/aiplugin-tool/) * [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) * [Dall-E Tool](/v0.1/docs/integrations/tools/dalle/) * [Discord Tool](/v0.1/docs/integrations/tools/discord/) * [DuckDuckGoSearch](/v0.1/docs/integrations/tools/duckduckgo_search/) * [Exa Search](/v0.1/docs/integrations/tools/exa_search/) * [Gmail Tool](/v0.1/docs/integrations/tools/gmail/) * [Google Calendar Tool](/v0.1/docs/integrations/tools/google_calendar/) * [Google Places Tool](/v0.1/docs/integrations/tools/google_places/) * [Agent with AWS Lambda](/v0.1/docs/integrations/tools/lambda_agent/) * [Python interpreter tool](/v0.1/docs/integrations/tools/pyinterpreter/) * [SearchApi tool](/v0.1/docs/integrations/tools/searchapi/) * [Searxng Search tool](/v0.1/docs/integrations/tools/searxng/) * [StackExchange Tool](/v0.1/docs/integrations/tools/stackexchange/) * [Tavily Search](/v0.1/docs/integrations/tools/tavily_search/) * [Web Browser Tool](/v0.1/docs/integrations/tools/webbrowser/) * [Wikipedia tool](/v0.1/docs/integrations/tools/wikipedia/) * [WolframAlpha Tool](/v0.1/docs/integrations/tools/wolframalpha/) * [Agent with Zapier NLA Integration](/v0.1/docs/integrations/tools/zapier_agent/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Tools](/v0.1/docs/integrations/tools/) * Agent with Zapier NLA Integration Agent with Zapier NLA Integration ================================= Full docs here: [https://nla.zapier.com/start/](https://nla.zapier.com/start/) **Zapier Natural Language Actions** gives you access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface. NLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets, Microsoft Teams, and thousands more apps: [https://zapier.com/apps](https://zapier.com/apps) Zapier NLA handles ALL the underlying API auth and translation from natural language --> underlying API call --> return simplified output for LLMs. The key idea is you, or your users, expose a set of actions via an oauth-like setup window, which you can then query and execute via a REST API. NLA offers both API Key and OAuth for signing NLA API requests. Server-side (API Key): for quickly getting started, testing, and production scenarios where LangChain will only use actions exposed in the developer's Zapier account (and will use the developer's connected accounts on Zapier.com) User-facing (Oauth): for production scenarios where you are deploying an end-user facing application and LangChain needs access to end-user's exposed actions and connected accounts on Zapier.com Attach NLA credentials via either an environment variable (`ZAPIER_NLA_OAUTH_ACCESS_TOKEN` or `ZAPIER_NLA_API_KEY`) or refer to the params argument in the API reference for `ZapierNLAWrapper`. Review [auth docs](https://nla.zapier.com/docs/authentication/) for more details. The example below demonstrates how to use the Zapier integration as an Agent: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";import { ZapierNLAWrapper } from "langchain/tools";import { initializeAgentExecutorWithOptions, ZapierToolKit,} from "langchain/agents";const model = new OpenAI({ temperature: 0 });const zapier = new ZapierNLAWrapper();const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);const executor = await initializeAgentExecutorWithOptions( toolkit.tools, model, { agentType: "zero-shot-react-description", verbose: true, });console.log("Loaded agent.");const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;console.log(`Executing with input "${input}"...`);const result = await executor.invoke({ input });console.log(`Got output ${result.output}`); #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [ZapierNLAWrapper](https://api.js.langchain.com/classes/langchain_tools.ZapierNLAWrapper.html) from `langchain/tools` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [ZapierToolKit](https://api.js.langchain.com/classes/langchain_agents.ZapierToolKit.html) from `langchain/agents` * * * #### Help us out by providing feedback on this documentation page: [ Previous WolframAlpha Tool ](/v0.1/docs/integrations/tools/wolframalpha/)[ Next Agents and toolkits ](/v0.1/docs/integrations/toolkits/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/toolkits/json/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) * [JSON Agent Toolkit](/v0.1/docs/integrations/toolkits/json/) * [OpenAPI Agent Toolkit](/v0.1/docs/integrations/toolkits/openapi/) * [AWS Step Functions Toolkit](/v0.1/docs/integrations/toolkits/sfn_agent/) * [SQL Agent Toolkit](/v0.1/docs/integrations/toolkits/sql/) * [VectorStore Agent Toolkit](/v0.1/docs/integrations/toolkits/vectorstore/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * JSON Agent Toolkit JSON Agent Toolkit ================== This example shows how to load and use an agent with a JSON toolkit. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import * as fs from "fs";import * as yaml from "js-yaml";import { OpenAI } from "@langchain/openai";import { JsonSpec, JsonObject } from "langchain/tools";import { JsonToolkit, createJsonAgent } from "langchain/agents";export const run = async () => { let data: JsonObject; try { const yamlFile = fs.readFileSync("openai_openapi.yaml", "utf8"); data = yaml.load(yamlFile) as JsonObject; if (!data) { throw new Error("Failed to load OpenAPI spec"); } } catch (e) { console.error(e); return; } const toolkit = new JsonToolkit(new JsonSpec(data)); const model = new OpenAI({ temperature: 0 }); const executor = createJsonAgent(model, toolkit); const input = `What are the required parameters in the request body to the /completions endpoint?`; console.log(`Executing with input "${input}"...`); const result = await executor.invoke({ input }); console.log(`Got output ${result.output}`); console.log( `Got intermediate steps ${JSON.stringify( result.intermediateSteps, null, 2 )}` );}; * * * #### Help us out by providing feedback on this documentation page: [ Previous Connery Toolkit ](/v0.1/docs/integrations/toolkits/connery/)[ Next OpenAPI Agent Toolkit ](/v0.1/docs/integrations/toolkits/openapi/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/toolkits/connery/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) * [JSON Agent Toolkit](/v0.1/docs/integrations/toolkits/json/) * [OpenAPI Agent Toolkit](/v0.1/docs/integrations/toolkits/openapi/) * [AWS Step Functions Toolkit](/v0.1/docs/integrations/toolkits/sfn_agent/) * [SQL Agent Toolkit](/v0.1/docs/integrations/toolkits/sql/) * [VectorStore Agent Toolkit](/v0.1/docs/integrations/toolkits/vectorstore/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * Connery Toolkit On this page Connery Toolkit =============== Using this toolkit, you can integrate Connery Actions into your LangChain agent. note If you want to use only one particular Connery Action in your agent, check out the [Connery Action Tool](/v0.1/docs/integrations/tools/connery/) documentation. What is Connery?[​](#what-is-connery "Direct link to What is Connery?") ----------------------------------------------------------------------- Connery is an open-source plugin infrastructure for AI. With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent. Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features. Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience. Learn more about Connery: * GitHub: [https://github.com/connery-io/connery](https://github.com/connery-io/connery) * Documentation: [https://docs.connery.io](https://docs.connery.io) Prerequisites[​](#prerequisites "Direct link to Prerequisites") --------------------------------------------------------------- To use Connery Actions in your LangChain agent, you need to do some preparation: 1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide. 2. Install all the plugins with the actions you want to use in your agent. 3. Set environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner. Example of using Connery Toolkit[​](#example-of-using-connery-toolkit "Direct link to Example of using Connery Toolkit") ------------------------------------------------------------------------------------------------------------------------ ### Setup[​](#setup "Direct link to Setup") To use the Connery Toolkit you need to install the following official peer dependency: * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). ### Usage[​](#usage "Direct link to Usage") In the example below, we create an agent that uses two Connery Actions to summarize a public webpage and send the summary by email: 1. **Summarize public webpage** action from the [Summarization](https://github.com/connery-io/summarization-plugin) plugin. 2. **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin. info You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/5485cb37-b73d-458f-8162-43639f2b49e1/r). import { ConneryService } from "@langchain/community/tools/connery";import { ConneryToolkit } from "@langchain/community/agents/toolkits/connery";import { ChatOpenAI } from "@langchain/openai";import { initializeAgentExecutorWithOptions } from "langchain/agents";// Specify your Connery Runner credentials.process.env.CONNERY_RUNNER_URL = "";process.env.CONNERY_RUNNER_API_KEY = "";// Specify OpenAI API key.process.env.OPENAI_API_KEY = "";// Specify your email address to receive the emails from examples below.const recepientEmail = "test@example.com";// Create a Connery Toolkit with all the available actions from the Connery Runner.const conneryService = new ConneryService();const conneryToolkit = await ConneryToolkit.createInstance(conneryService);// Use OpenAI Functions agent to execute the prompt using actions from the Connery Toolkit.const llm = new ChatOpenAI({ temperature: 0 });const agent = await initializeAgentExecutorWithOptions( conneryToolkit.tools, llm, { agentType: "openai-functions", verbose: true, });const result = await agent.invoke({ input: `Make a short summary of the webpage http://www.paulgraham.com/vb.html in three sentences ` + `and send it to ${recepientEmail}. Include the link to the webpage into the body of the email.`,});console.log(result.output); #### API Reference: * [ConneryService](https://api.js.langchain.com/classes/langchain_community_tools_connery.ConneryService.html) from `@langchain/community/tools/connery` * [ConneryToolkit](https://api.js.langchain.com/classes/langchain_community_agents_toolkits_connery.ConneryToolkit.html) from `@langchain/community/agents/toolkits/connery` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` note Connery Action is a structured tool, so you can only use it in the agents supporting structured tools. * * * #### Help us out by providing feedback on this documentation page: [ Previous Agents and toolkits ](/v0.1/docs/integrations/toolkits/)[ Next JSON Agent Toolkit ](/v0.1/docs/integrations/toolkits/json/) * [What is Connery?](#what-is-connery) * [Prerequisites](#prerequisites) * [Example of using Connery Toolkit](#example-of-using-connery-toolkit) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/toolkits/openapi/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) * [JSON Agent Toolkit](/v0.1/docs/integrations/toolkits/json/) * [OpenAPI Agent Toolkit](/v0.1/docs/integrations/toolkits/openapi/) * [AWS Step Functions Toolkit](/v0.1/docs/integrations/toolkits/sfn_agent/) * [SQL Agent Toolkit](/v0.1/docs/integrations/toolkits/sql/) * [VectorStore Agent Toolkit](/v0.1/docs/integrations/toolkits/vectorstore/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * OpenAPI Agent Toolkit OpenAPI Agent Toolkit ===================== This example shows how to load and use an agent with a OpenAPI toolkit. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import * as fs from "fs";import * as yaml from "js-yaml";import { OpenAI } from "@langchain/openai";import { JsonSpec, JsonObject } from "langchain/tools";import { createOpenApiAgent, OpenApiToolkit } from "langchain/agents";export const run = async () => { let data: JsonObject; try { const yamlFile = fs.readFileSync("openai_openapi.yaml", "utf8"); data = yaml.load(yamlFile) as JsonObject; if (!data) { throw new Error("Failed to load OpenAPI spec"); } } catch (e) { console.error(e); return; } const headers = { "Content-Type": "application/json", Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, }; const model = new OpenAI({ temperature: 0 }); const toolkit = new OpenApiToolkit(new JsonSpec(data), model, headers); const executor = createOpenApiAgent(model, toolkit); const input = `Make a POST request to openai /completions. The prompt should be 'tell me a joke.'`; console.log(`Executing with input "${input}"...`); const result = await executor.invoke({ input }); console.log(`Got output ${result.output}`); console.log( `Got intermediate steps ${JSON.stringify( result.intermediateSteps, null, 2 )}` );}; Disclaimer ⚠️ ============= This agent can make requests to external APIs. Use with caution, especially when granting access to users. Be aware that this agent could theoretically send requests with provided credentials or other sensitive data to unverified or potentially malicious URLs --although it should never in theory. Consider adding limitations to what actions can be performed via the agent, what APIs it can access, what headers can be passed, and more. In addition, consider implementing measures to validate URLs before sending requests, and to securely handle and protect sensitive data such as credentials. * * * #### Help us out by providing feedback on this documentation page: [ Previous JSON Agent Toolkit ](/v0.1/docs/integrations/toolkits/json/)[ Next AWS Step Functions Toolkit ](/v0.1/docs/integrations/toolkits/sfn_agent/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/toolkits/vectorstore/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) * [JSON Agent Toolkit](/v0.1/docs/integrations/toolkits/json/) * [OpenAPI Agent Toolkit](/v0.1/docs/integrations/toolkits/openapi/) * [AWS Step Functions Toolkit](/v0.1/docs/integrations/toolkits/sfn_agent/) * [SQL Agent Toolkit](/v0.1/docs/integrations/toolkits/sql/) * [VectorStore Agent Toolkit](/v0.1/docs/integrations/toolkits/vectorstore/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * VectorStore Agent Toolkit VectorStore Agent Toolkit ========================= tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community This example shows how to load and use an agent with a vectorstore toolkit. import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import * as fs from "fs";import { VectorStoreToolkit, createVectorStoreAgent, VectorStoreInfo,} from "langchain/agents";const model = new OpenAI({ temperature: 0 });/* Load in the file we want to do question answering over */const text = fs.readFileSync("state_of_the_union.txt", "utf8");/* Split the text into chunks using character, not token, size */const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });const docs = await textSplitter.createDocuments([text]);/* Create the vectorstore */const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());/* Create the agent */const vectorStoreInfo: VectorStoreInfo = { name: "state_of_union_address", description: "the most recent state of the Union address", vectorStore,};const toolkit = new VectorStoreToolkit(vectorStoreInfo, model);const agent = createVectorStoreAgent(model, toolkit);const input = "What did biden say about Ketanji Brown Jackson is the state of the union address?";console.log(`Executing: ${input}`);const result = await agent.invoke({ input });console.log(`Got output ${result.output}`);console.log( `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`); #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter` * [VectorStoreToolkit](https://api.js.langchain.com/classes/langchain_agents.VectorStoreToolkit.html) from `langchain/agents` * [createVectorStoreAgent](https://api.js.langchain.com/functions/langchain_agents.createVectorStoreAgent.html) from `langchain/agents` * [VectorStoreInfo](https://api.js.langchain.com/interfaces/langchain_agents.VectorStoreInfo.html) from `langchain/agents` * * * #### Help us out by providing feedback on this documentation page: [ Previous SQL Agent Toolkit ](/v0.1/docs/integrations/toolkits/sql/)[ Next Chat Memory ](/v0.1/docs/integrations/chat_memory/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/toolkits/sfn_agent/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Connery Toolkit](/v0.1/docs/integrations/toolkits/connery/) * [JSON Agent Toolkit](/v0.1/docs/integrations/toolkits/json/) * [OpenAPI Agent Toolkit](/v0.1/docs/integrations/toolkits/openapi/) * [AWS Step Functions Toolkit](/v0.1/docs/integrations/toolkits/sfn_agent/) * [SQL Agent Toolkit](/v0.1/docs/integrations/toolkits/sql/) * [VectorStore Agent Toolkit](/v0.1/docs/integrations/toolkits/vectorstore/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * AWS Step Functions Toolkit AWS Step Functions Toolkit ========================== **AWS Step Functions** are a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines. By including a `AWSSfn` tool in the list of tools provided to an Agent, you can grant your Agent the ability to invoke async workflows running in your AWS Cloud. When an Agent uses the `AWSSfn` tool, it will provide an argument of type `string` which will in turn be passed into one of the supported actions this tool supports. The supported actions are: `StartExecution`, `DescribeExecution`, and `SendTaskSuccess`. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to install the Node AWS Step Functions SDK: * npm * Yarn * pnpm npm install @aws-sdk/client-sfn yarn add @aws-sdk/client-sfn pnpm add @aws-sdk/client-sfn Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community ### Note about credentials:[​](#note-about-credentials "Direct link to Note about credentials:") * If you have not run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via the AWS CLI, the `region`, `accessKeyId`, and `secretAccessKey` must be provided to the AWSSfn constructor. * The IAM role corresponding to those credentials must have permission to invoke the Step Function. import { OpenAI } from "@langchain/openai";import { AWSSfnToolkit } from "@langchain/community/agents/toolkits/aws_sfn";import { createAWSSfnAgent } from "langchain/agents/toolkits/aws_sfn";const _EXAMPLE_STATE_MACHINE_ASL = `{ "Comment": "A simple example of the Amazon States Language to define a state machine for new client onboarding.", "StartAt": "OnboardNewClient", "States": { "OnboardNewClient": { "Type": "Pass", "Result": "Client onboarded!", "End": true } }}`;/** * This example uses a deployed AWS Step Function state machine with the above Amazon State Language (ASL) definition. * You can test by provisioning a state machine using the above ASL within your AWS environment, or you can use a tool like LocalStack * to mock AWS services locally. See https://localstack.cloud/ for more information. */export const run = async () => { const model = new OpenAI({ temperature: 0 }); const toolkit = new AWSSfnToolkit({ name: "onboard-new-client-workflow", description: "Onboard new client workflow. Can also be used to get status of any excuting workflow or state machine.", stateMachineArn: "arn:aws:states:us-east-1:1234567890:stateMachine:my-state-machine", // Update with your state machine ARN accordingly region: "<your Sfn's region>", accessKeyId: "<your access key id>", secretAccessKey: "<your secret access key>", }); const executor = createAWSSfnAgent(model, toolkit); const input = `Onboard john doe (john@example.com) as a new client.`; console.log(`Executing with input "${input}"...`); const result = await executor.invoke({ input }); console.log(`Got output ${result.output}`); console.log( `Got intermediate steps ${JSON.stringify( result.intermediateSteps, null, 2 )}` );}; #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [AWSSfnToolkit](https://api.js.langchain.com/classes/langchain_community_agents_toolkits_aws_sfn.AWSSfnToolkit.html) from `@langchain/community/agents/toolkits/aws_sfn` * [createAWSSfnAgent](https://api.js.langchain.com/functions/langchain_agents_toolkits_aws_sfn.createAWSSfnAgent.html) from `langchain/agents/toolkits/aws_sfn` * * * #### Help us out by providing feedback on this documentation page: [ Previous OpenAPI Agent Toolkit ](/v0.1/docs/integrations/toolkits/openapi/)[ Next SQL Agent Toolkit ](/v0.1/docs/integrations/toolkits/sql/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/astradb/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Astra DB Chat Memory Astra DB Chat Memory ==================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for Astra DB. Setup[​](#setup "Direct link to Setup") --------------------------------------- You need to install the Astra DB TS client: * npm * Yarn * pnpm npm install @datastax/astra-db-ts yarn add @datastax/astra-db-ts pnpm add @datastax/astra-db-ts tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Configuration and Initalization[​](#configuration-and-initalization "Direct link to Configuration and Initalization") --------------------------------------------------------------------------------------------------------------------- There are two ways to inialize your `AstraDBChatMessageHistory` If you already have an instance of the `AstraDB` client defined you can connect to your collection and initialize an instance of the `ChatMessageHistory` using the constuctor. const client = (client = new AstraDB( process.env.ASTRA_DB_APPLICATION_TOKEN, process.env.ASTRA_DB_ENDPOINT, process.env.ASTRA_DB_NAMESPACE));const collection = await client.collection("YOUR_COLLECTION_NAME");const chatHistory = new AstraDBChatMessageHistory({ collection, sessionId: "YOUR_SESSION_ID",}); If you don't already have an instance of an `AstraDB` client you can use the `initialize` method. const chatHistory = await AstraDBChatMessageHistory.initialize({ token: process.env.ASTRA_DB_APPLICATION_TOKEN ?? "token", endpoint: process.env.ASTRA_DB_ENDPOINT ?? "endpoint", namespace: process.env.ASTRA_DB_NAMESPACE, collectionName: "YOUR_COLLECTION_NAME", sessionId: "YOUR_SESSION_ID",}); Usage[​](#usage "Direct link to Usage") --------------------------------------- Tip Your collection must already exist import { RunnableWithMessageHistory } from "@langchain/core/runnables";import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatOpenAI } from "@langchain/openai";import { AstraDBChatMessageHistory } from "@langchain/community/stores/message/astradb";const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const prompt = ChatPromptTemplate.fromMessages([ [ "system", "You are a helpful assistant. Answer all questions to the best of your ability.", ], new MessagesPlaceholder("chat_history"), ["human", "{input}"],]);const chain = prompt.pipe(model).pipe(new StringOutputParser());const chainWithHistory = new RunnableWithMessageHistory({ runnable: chain, inputMessagesKey: "input", historyMessagesKey: "chat_history", getMessageHistory: async (sessionId) => { const chatHistory = await AstraDBChatMessageHistory.initialize({ token: process.env.ASTRA_DB_APPLICATION_TOKEN as string, endpoint: process.env.ASTRA_DB_ENDPOINT as string, namespace: process.env.ASTRA_DB_NAMESPACE, collectionName: "YOUR_COLLECTION_NAME", sessionId, }); return chatHistory; },});const res1 = await chainWithHistory.invoke( { input: "Hi! I'm Jim.", }, { configurable: { sessionId: "langchain-test-session" } });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chainWithHistory.invoke( { input: "What did I just say my name was?" }, { configurable: { sessionId: "langchain-test-session" } });console.log({ res2 });/*{ res2: { text: "You said your name was Jim." }}*/ #### API Reference: * [RunnableWithMessageHistory](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableWithMessageHistory.html) from `@langchain/core/runnables` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [AstraDBChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_astradb.AstraDBChatMessageHistory.html) from `@langchain/community/stores/message/astradb` * * * #### Help us out by providing feedback on this documentation page: [ Previous Chat Memory ](/v0.1/docs/integrations/chat_memory/)[ Next Cassandra Chat Memory ](/v0.1/docs/integrations/chat_memory/cassandra/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/cassandra/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Cassandra Chat Memory Cassandra Chat Memory ===================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a Cassandra cluster. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, install the Cassandra Node.js driver: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install cassandra-driver @langchain/openai @langchain/community yarn add cassandra-driver @langchain/openai @langchain/community pnpm add cassandra-driver @langchain/openai @langchain/community Depending on your database providers, the specifics of how to connect to the database will vary. We will create a document `configConnection` which will be used as part of the vector store configuration. ### Apache Cassandra®[​](#apache-cassandra "Direct link to Apache Cassandra®") const configConnection = { contactPoints: ['h1', 'h2'], localDataCenter: 'datacenter1', credentials: { username: <...> as string, password: <...> as string, },}; ### Astra DB[​](#astra-db "Direct link to Astra DB") Astra DB is a cloud-native Cassandra-as-a-Service platform. 1. Create an [Astra DB account](https://astra.datastax.com/register). 2. Create a [vector enabled database](https://astra.datastax.com/createDatabase). 3. Create a [token](https://docs.datastax.com/en/astra/docs/manage-application-tokens.html) for your database. const configConnection = { serviceProviderArgs: { astra: { token: <...> as string, endpoint: <...> as string, }, },}; Instead of `endpoint:`, you many provide property `datacenterID:` and optionally `regionName:`. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { BufferMemory } from "langchain/memory";import { CassandraChatMessageHistory } from "@langchain/community/stores/message/cassandra";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";// The example below uses Astra DB, but you can use any Cassandra connectionconst configConnection = { serviceProviderArgs: { astra: { token: "<your Astra Token>" as string, endpoint: "<your Astra Endpoint>" as string, }, },};const memory = new BufferMemory({ chatHistory: new CassandraChatMessageHistory({ ...configConnection, keyspace: "langchain", table: "message_history", sessionId: "<some unique session identifier>", }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jonathan." });console.log({ res1 });/*{ res1: { text: "Hello Jonathan! How can I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jonathan." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [CassandraChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_cassandra.CassandraChatMessageHistory.html) from `@langchain/community/stores/message/cassandra` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * * * #### Help us out by providing feedback on this documentation page: [ Previous Astra DB Chat Memory ](/v0.1/docs/integrations/chat_memory/astradb/)[ Next Cloudflare D1-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/cloudflare_d1/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Cloudflare D1-Backed Chat Memory Cloudflare D1-Backed Chat Memory ================================ info This integration is only supported in Cloudflare Workers. For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a Cloudflare D1 instance. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to install the LangChain Cloudflare integration package. For the below example, we also use Anthropic, but you can use any model you'd like: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/cloudflare @langchain/anthropic yarn add @langchain/cloudflare @langchain/anthropic pnpm add @langchain/cloudflare @langchain/anthropic Set up a D1 instance for your worker by following [the official documentation](https://developers.cloudflare.com/d1/). Your project's `wrangler.toml` file should look something like this: name = "YOUR_PROJECT_NAME"main = "src/index.ts"compatibility_date = "2024-01-10"[vars]ANTHROPIC_API_KEY = "YOUR_ANTHROPIC_KEY"[[d1_databases]]binding = "DB" # available in your Worker as env.DBdatabase_name = "YOUR_D1_DB_NAME"database_id = "YOUR_D1_DB_ID" Usage[​](#usage "Direct link to Usage") --------------------------------------- You can then use D1 to store your history as follows: import type { D1Database } from "@cloudflare/workers-types";import { BufferMemory } from "langchain/memory";import { CloudflareD1MessageHistory } from "@langchain/cloudflare";import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { RunnableSequence } from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";import { ChatAnthropic } from "@langchain/anthropic";export interface Env { DB: D1Database; ANTHROPIC_API_KEY: string;}export default { async fetch(request: Request, env: Env): Promise<Response> { try { const { searchParams } = new URL(request.url); const input = searchParams.get("input"); if (!input) { throw new Error(`Missing "input" parameter`); } const memory = new BufferMemory({ returnMessages: true, chatHistory: new CloudflareD1MessageHistory({ tableName: "stored_message", sessionId: "example", database: env.DB, }), }); const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful chatbot"], new MessagesPlaceholder("history"), ["human", "{input}"], ]); const model = new ChatAnthropic({ apiKey: env.ANTHROPIC_API_KEY, }); const chain = RunnableSequence.from([ { input: (initialInput) => initialInput.input, memory: () => memory.loadMemoryVariables({}), }, { input: (previousOutput) => previousOutput.input, history: (previousOutput) => previousOutput.memory.history, }, prompt, model, new StringOutputParser(), ]); const chainInput = { input }; const res = await chain.invoke(chainInput); await memory.saveContext(chainInput, { output: res, }); return new Response(JSON.stringify(res), { headers: { "content-type": "application/json" }, }); } catch (err: any) { console.log(err.message); return new Response(err.message, { status: 500 }); } },}; #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [CloudflareD1MessageHistory](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareD1MessageHistory.html) from `@langchain/cloudflare` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts` * [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` * * * #### Help us out by providing feedback on this documentation page: [ Previous Cassandra Chat Memory ](/v0.1/docs/integrations/chat_memory/cassandra/)[ Next Convex Chat Memory ](/v0.1/docs/integrations/chat_memory/convex/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/convex/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Convex Chat Memory Convex Chat Memory ================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for [Convex](https://convex.dev/). Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Create project[​](#create-project "Direct link to Create project") Get a working [Convex](https://docs.convex.dev/) project set up, for example by using: npm create convex@latest ### Add database accessors[​](#add-database-accessors "Direct link to Add database accessors") Add query and mutation helpers to `convex/langchain/db.ts`: convex/langchain/db.ts export * from "langchain/util/convex"; ### Configure your schema[​](#configure-your-schema "Direct link to Configure your schema") Set up your schema (for indexing): convex/schema.ts import { defineSchema, defineTable } from "convex/server";import { v } from "convex/values";export default defineSchema({ messages: defineTable({ sessionId: v.string(), message: v.object({ type: v.string(), data: v.object({ content: v.string(), role: v.optional(v.string()), name: v.optional(v.string()), additional_kwargs: v.optional(v.any()), }), }), }).index("bySessionId", ["sessionId"]),}); Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in Convex must have a unique session id. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community convex/myActions.ts "use node";import { v } from "convex/values";import { BufferMemory } from "langchain/memory";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { ConvexChatMessageHistory } from "@langchain/community/stores/message/convex";import { action } from "./_generated/server.js";export const ask = action({ args: { sessionId: v.string() }, handler: async (ctx, args) => { // pass in a sessionId string const { sessionId } = args; const memory = new BufferMemory({ chatHistory: new ConvexChatMessageHistory({ sessionId, ctx }), }); const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0, }); const chain = new ConversationChain({ llm: model, memory }); const res1 = await chain.invoke({ input: "Hi! I'm Jim." }); console.log({ res1 }); /* { res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" } } */ const res2 = await chain.invoke({ input: "What did I just say my name was?", }); console.log({ res2 }); /* { res2: { text: "You said your name was Jim." } } */ // See the chat history in the Convex database console.log(await memory.chatHistory.getMessages()); // clear chat history await memory.chatHistory.clear(); },}); #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [ConvexChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_convex.ConvexChatMessageHistory.html) from `@langchain/community/stores/message/convex` * * * #### Help us out by providing feedback on this documentation page: [ Previous Cloudflare D1-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/cloudflare_d1/)[ Next DynamoDB-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/dynamodb/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/firestore/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Firestore Chat Memory Firestore Chat Memory ===================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a firestore. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, install the Firebase admin package in your project: * npm * Yarn * pnpm npm install firebase-admin yarn add firebase-admin pnpm add firebase-admin tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Visit the `Project Settings` page from your Firebase project and select the `Service accounts` tab. Inside the `Service accounts` tab, click the `Generate new private key` button inside the `Firebase Admin SDK` section to download a JSON file containing your service account's credentials. Using the downloaded JSON file, pass in the `projectId`, `privateKey`, and `clientEmail` to the `config` object of the `FirestoreChatMessageHistory` class, like shown below: import { FirestoreChatMessageHistory } from "@langchain/community/stores/message/firestore";import admin from "firebase-admin";const messageHistory = new FirestoreChatMessageHistory({ collections: ["chats"], docs: ["user-id"], sessionId: "user-id", userId: "a@example.com", config: { projectId: "YOUR-PROJECT-ID", credential: admin.credential.cert({ projectId: "YOUR-PROJECT-ID", privateKey: "-----BEGIN PRIVATE KEY-----\nCHANGE-ME\n-----END PRIVATE KEY-----\n", clientEmail: "CHANGE-ME@CHANGE-ME-TOO.iam.gserviceaccount.com", }), },}); Here, the `collections` field should match the names and ordering of the `collections` in your database. The same goes for `docs`, it should match the names and ordering of the `docs` in your database. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { BufferMemory } from "langchain/memory";import { FirestoreChatMessageHistory } from "@langchain/community/stores/message/firestore";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import admin from "firebase-admin";const memory = new BufferMemory({ chatHistory: new FirestoreChatMessageHistory({ collections: ["langchain"], docs: ["lc-example"], sessionId: "lc-example-id", userId: "a@example.com", config: { projectId: "YOUR-PROJECT-ID", credential: admin.credential.cert({ projectId: "YOUR-PROJECT-ID", privateKey: "-----BEGIN PRIVATE KEY-----\nnCHANGE-ME\n-----END PRIVATE KEY-----\n", clientEmail: "CHANGE-ME@CHANGE-ME-TOO.iam.gserviceaccount.com", }), }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" } }*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." } }*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [FirestoreChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_firestore.FirestoreChatMessageHistory.html) from `@langchain/community/stores/message/firestore` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` ### Nested Collections[​](#nested-collections "Direct link to Nested Collections") The `FirestoreChatMessageHistory` class supports nested collections, and dynamic collection/doc names. The example below shows how to add and retrieve messages from a database with the following structure: /chats/{chat-id}/bots/{bot-id}/messages/{message-id} import { BufferMemory } from "langchain/memory";import { FirestoreChatMessageHistory } from "@langchain/community/stores/message/firestore";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import admin from "firebase-admin";const memory = new BufferMemory({ chatHistory: new FirestoreChatMessageHistory({ collections: ["chats", "bots"], docs: ["chat-id", "bot-id"], sessionId: "user-id", userId: "a@example.com", config: { projectId: "YOUR-PROJECT-ID", credential: admin.credential.cert({ projectId: "YOUR-PROJECT-ID", privateKey: "-----BEGIN PRIVATE KEY-----\nnCHANGE-ME\n-----END PRIVATE KEY-----\n", clientEmail: "CHANGE-ME@CHANGE-ME-TOO.iam.gserviceaccount.com", }), }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { response: 'Hello Jim! How can I assist you today?' } }*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res2: { response: 'You just said that your name is Jim.' } }*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [FirestoreChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_firestore.FirestoreChatMessageHistory.html) from `@langchain/community/stores/message/firestore` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` Firestore Rules[​](#firestore-rules "Direct link to Firestore Rules") --------------------------------------------------------------------- If your collection name is "chathistory," you can configure Firestore rules as follows. match /chathistory/{sessionId} { allow read: if request.auth.uid == resource.data.createdBy; allow write: if request.auth.uid == request.resource.data.createdBy; } match /chathistory/{sessionId}/messages/{messageId} { allow read: if request.auth.uid == resource.data.createdBy; allow write: if request.auth.uid == request.resource.data.createdBy; } * * * #### Help us out by providing feedback on this documentation page: [ Previous DynamoDB-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/dynamodb/)[ Next IPFS Datastore Chat Memory ](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/ipfs_datastore/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * IPFS Datastore Chat Memory IPFS Datastore Chat Memory ========================== For a storage backend you can use the IPFS Datastore Chat Memory to wrap an IPFS Datastore allowing you to use any IPFS compatible datastore. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, install the integration dependencies: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install cborg interface-datastore it-all @langchain/community yarn add cborg interface-datastore it-all @langchain/community pnpm add cborg interface-datastore it-all @langchain/community Now you can install and use an IPFS Datastore of your choice. Here are some options: * [datastore-core](https://github.com/ipfs/js-stores/blob/main/packages/datastore-core) Datastore in-memory implementation. * [datastore-fs](https://github.com/ipfs/js-stores/blob/main/packages/datastore-fs) Datastore implementation with file system backend. * [datastore-idb](https://github.com/ipfs/js-stores/blob/main/packages/datastore-idb) Datastore implementation with IndexedDB backend. * [datastore-level](https://github.com/ipfs/js-stores/blob/main/packages/datastore-level) Datastore implementation with level(up|down) backend * [datastore-s3](https://github.com/ipfs/js-stores/blob/main/packages/datastore-s3) Datastore implementation backed by s3. Usage[​](#usage "Direct link to Usage") --------------------------------------- // Replace FsDatastore with the IPFS Datastore of your choice.import { FsDatastore } from "datastore-fs";import { IPFSDatastoreChatMessageHistory } from "@langchain/community/stores/message/ipfs_datastore";const datastore = new FsDatastore("path/to/store");const sessionId = "my-session";const history = new IPFSDatastoreChatMessageHistory({ datastore, sessionId }); * * * #### Help us out by providing feedback on this documentation page: [ Previous Firestore Chat Memory ](/v0.1/docs/integrations/chat_memory/firestore/)[ Next Momento-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/momento/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/momento/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Momento-Backed Chat Memory Momento-Backed Chat Memory ========================== For distributed, serverless persistence across chat sessions, you can swap in a [Momento](https://gomomento.com/)\-backed chat message history. Because a Momento cache is instantly available and requires zero infrastructure maintenance, it's a great way to get started with chat history whether building locally or in production. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to install the [Momento Client Library](https://github.com/momentohq/client-sdk-javascript) in your project. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package. To install for **Node.js**: * npm * Yarn * pnpm npm install @gomomento/sdk yarn add @gomomento/sdk pnpm add @gomomento/sdk To install for **browser/edge workers**: * npm * Yarn * pnpm npm install @gomomento/sdk-web yarn add @gomomento/sdk-web pnpm add @gomomento/sdk-web tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community You will also need an API key from [Momento](https://gomomento.com/). You can sign up for a free account [here](https://console.gomomento.com/). Usage[​](#usage "Direct link to Usage") --------------------------------------- To distinguish one chat history session from another, we need a unique `sessionId`. You may also provide an optional `sessionTtl` to make sessions expire after a given number of seconds. import { CacheClient, Configurations, CredentialProvider,} from "@gomomento/sdk"; // `from "gomomento/sdk-web";` for browser/edgeimport { BufferMemory } from "langchain/memory";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { MomentoChatMessageHistory } from "@langchain/community/stores/message/momento";// See https://github.com/momentohq/client-sdk-javascript for connection optionsconst client = new CacheClient({ configuration: Configurations.Laptop.v1(), credentialProvider: CredentialProvider.fromEnvironmentVariable({ environmentVariableName: "MOMENTO_API_KEY", }), defaultTtlSeconds: 60 * 60 * 24,});// Create a unique session IDconst sessionId = new Date().toISOString();const cacheName = "langchain";const memory = new BufferMemory({ chatHistory: await MomentoChatMessageHistory.fromProps({ client, cacheName, sessionId, sessionTtl: 300, }),});console.log( `cacheName=${cacheName} and sessionId=${sessionId} . This will be used to store the chat history. You can inspect the values at your Momento console at https://console.gomomento.com.`);const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/// See the chat history in the Momentoconsole.log(await memory.chatHistory.getMessages()); #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [MomentoChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_momento.MomentoChatMessageHistory.html) from `@langchain/community/stores/message/momento` * * * #### Help us out by providing feedback on this documentation page: [ Previous IPFS Datastore Chat Memory ](/v0.1/docs/integrations/chat_memory/ipfs_datastore/)[ Next MongoDB Chat Memory ](/v0.1/docs/integrations/chat_memory/mongodb/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/mongodb/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * MongoDB Chat Memory MongoDB Chat Memory =================== Compatibility Only available on Node.js. You can still create API routes that use MongoDB with Next.js by setting the `runtime` variable to `nodejs` like so: export const runtime = "nodejs"; You can read more about Edge runtimes in the Next.js documentation [here](https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes). For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a MongoDB instance. Setup[​](#setup "Direct link to Setup") --------------------------------------- You need to install Node MongoDB SDK in your project: * npm * Yarn * pnpm npm install -S mongodb yarn add mongodb pnpm add mongodb tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community You will also need a MongoDB instance to connect to. Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in MongoDB must have a unique session id. import { MongoClient, ObjectId } from "mongodb";import { BufferMemory } from "langchain/memory";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { MongoDBChatMessageHistory } from "@langchain/mongodb";const client = new MongoClient(process.env.MONGODB_ATLAS_URI || "", { driverInfo: { name: "langchainjs" },});await client.connect();const collection = client.db("langchain").collection("memory");// generate a new sessionId stringconst sessionId = new ObjectId().toString();const memory = new BufferMemory({ chatHistory: new MongoDBChatMessageHistory({ collection, sessionId, }),});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/* { res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" } } */const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/* { res1: { text: "You said your name was Jim." } } */// See the chat history in the MongoDbconsole.log(await memory.chatHistory.getMessages());// clear chat historyawait memory.chatHistory.clear(); #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [MongoDBChatMessageHistory](https://api.js.langchain.com/classes/langchain_mongodb.MongoDBChatMessageHistory.html) from `@langchain/mongodb` * * * #### Help us out by providing feedback on this documentation page: [ Previous Momento-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/momento/)[ Next Motörhead Memory ](/v0.1/docs/integrations/chat_memory/motorhead_memory/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/motorhead_memory/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Motörhead Memory Motörhead Memory ================ [Motörhead](https://github.com/getmetal/motorhead) is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. Setup[​](#setup "Direct link to Setup") --------------------------------------- See instructions at [Motörhead](https://github.com/getmetal/motorhead) for running the server locally, or [https://getmetal.io](https://getmetal.io) to get API keys for the hosted version. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { MotorheadMemory } from "@langchain/community/memory/motorhead_memory";// Managed Example (visit https://getmetal.io to get your keys)// const managedMemory = new MotorheadMemory({// memoryKey: "chat_history",// sessionId: "test",// apiKey: "MY_API_KEY",// clientId: "MY_CLIENT_ID",// });// Self Hosted Exampleconst memory = new MotorheadMemory({ memoryKey: "chat_history", sessionId: "test", url: "localhost:8080", // Required for self hosted});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [MotorheadMemory](https://api.js.langchain.com/classes/langchain_community_memory_motorhead_memory.MotorheadMemory.html) from `@langchain/community/memory/motorhead_memory` * * * #### Help us out by providing feedback on this documentation page: [ Previous MongoDB Chat Memory ](/v0.1/docs/integrations/chat_memory/mongodb/)[ Next PlanetScale Chat Memory ](/v0.1/docs/integrations/chat_memory/planetscale/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/planetscale/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * PlanetScale Chat Memory PlanetScale Chat Memory ======================= Because PlanetScale works via a REST API, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and other Serverless environments. For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for an PlanetScale [Database](https://planetscale.com/) instance. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to install [@planetscale/database](https://github.com/planetscale/database-js) in your project: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @planetscale/database @langchain/community yarn add @langchain/openai @planetscale/database @langchain/community pnpm add @langchain/openai @planetscale/database @langchain/community You will also need an PlanetScale Account and a database to connect to. See instructions on [PlanetScale Docs](https://planetscale.com/docs) on how to create a HTTP client. Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in PlanetScale database must have a unique id. The `config` parameter is passed directly into the `new Client()` constructor of [@planetscale/database](https://planetscale.com/docs/tutorials/planetscale-serverless-driver), and takes all the same arguments. import { BufferMemory } from "langchain/memory";import { PlanetScaleChatMessageHistory } from "@langchain/community/stores/message/planetscale";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new PlanetScaleChatMessageHistory({ tableName: "stored_message", sessionId: "lc-example", config: { url: "ADD_YOURS_HERE", // Override with your own database instance's URL }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [PlanetScaleChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_planetscale.PlanetScaleChatMessageHistory.html) from `@langchain/community/stores/message/planetscale` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage") ------------------------------------------------------------------ You can also directly pass in a previously created [@planetscale/database](https://planetscale.com/docs/tutorials/planetscale-serverless-driver) client instance: import { BufferMemory } from "langchain/memory";import { PlanetScaleChatMessageHistory } from "@langchain/community/stores/message/planetscale";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { Client } from "@planetscale/database";// Create your own Planetscale database clientconst client = new Client({ url: "ADD_YOURS_HERE", // Override with your own database instance's URL});const memory = new BufferMemory({ chatHistory: new PlanetScaleChatMessageHistory({ tableName: "stored_message", sessionId: "lc-example", client, // You can reuse your existing database client }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [PlanetScaleChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_planetscale.PlanetScaleChatMessageHistory.html) from `@langchain/community/stores/message/planetscale` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * * * #### Help us out by providing feedback on this documentation page: [ Previous Motörhead Memory ](/v0.1/docs/integrations/chat_memory/motorhead_memory/)[ Next Postgres Chat Memory ](/v0.1/docs/integrations/chat_memory/postgres/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/postgres/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Postgres Chat Memory Postgres Chat Memory ==================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` for a [Postgres](https://www.postgresql.org/) Database. Setup[​](#setup "Direct link to Setup") --------------------------------------- First install the [node-postgres](https://node-postgres.com/) package: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community pg yarn add @langchain/openai @langchain/community pg pnpm add @langchain/openai @langchain/community pg Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session is stored in a Postgres database and requires a session id. The connection to postgres is handled through a pool. You can either pass an instance of a pool via the `pool` parameter or pass a pool config via the `poolConfig` parameter. See [pg-node docs on pools](https://node-postgres.com/apis/pool) for more information. A provided pool takes precedence, thus if both a pool instance and a pool config are passed, only the pool will be used. import pg from "pg";import { PostgresChatMessageHistory } from "@langchain/community/stores/message/postgres";import { ChatOpenAI } from "@langchain/openai";import { RunnableWithMessageHistory } from "@langchain/core/runnables";import { ChatPromptTemplate, MessagesPlaceholder,} from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";const poolConfig = { host: "127.0.0.1", port: 5432, user: "myuser", password: "ChangeMe", database: "api",};const pool = new pg.Pool(poolConfig);const model = new ChatOpenAI();const prompt = ChatPromptTemplate.fromMessages([ [ "system", "You are a helpful assistant. Answer all questions to the best of your ability.", ], new MessagesPlaceholder("chat_history"), ["human", "{input}"],]);const chain = prompt.pipe(model).pipe(new StringOutputParser());const chainWithHistory = new RunnableWithMessageHistory({ runnable: chain, inputMessagesKey: "input", historyMessagesKey: "chat_history", getMessageHistory: async (sessionId) => { const chatHistory = new PostgresChatMessageHistory({ sessionId, pool, // Can also pass `poolConfig` to initialize the pool internally, // but easier to call `.end()` at the end later. }); return chatHistory; },});const res1 = await chainWithHistory.invoke( { input: "Hi! I'm MJDeligan.", }, { configurable: { sessionId: "langchain-test-session" } });console.log(res1);/* "Hello MJDeligan! It's nice to meet you. My name is AI. How may I assist you today?"*/const res2 = await chainWithHistory.invoke( { input: "What did I just say my name was?" }, { configurable: { sessionId: "langchain-test-session" } });console.log(res2);/* "You said your name was MJDeligan."*/// If you provided a pool config you should close the created pool when you are doneawait pool.end(); #### API Reference: * [PostgresChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_postgres.PostgresChatMessageHistory.html) from `@langchain/community/stores/message/postgres` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [RunnableWithMessageHistory](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableWithMessageHistory.html) from `@langchain/core/runnables` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * * * #### Help us out by providing feedback on this documentation page: [ Previous PlanetScale Chat Memory ](/v0.1/docs/integrations/chat_memory/planetscale/)[ Next Redis-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/redis/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/redis/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Redis-Backed Chat Memory Redis-Backed Chat Memory ======================== For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a [Redis](https://redis.io/) instance. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to install [node-redis](https://github.com/redis/node-redis) in your project: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai redis @langchain/community yarn add @langchain/openai redis @langchain/community pnpm add @langchain/openai redis @langchain/community You will also need a Redis instance to connect to. See instructions on [the official Redis website](https://redis.io/docs/getting-started/) for running the server locally. Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in Redis must have a unique id. You can provide an optional `sessionTTL` to make sessions expire after a give number of seconds. The `config` parameter is passed directly into the `createClient` method of [node-redis](https://github.com/redis/node-redis), and takes all the same arguments. import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "@langchain/community/stores/message/ioredis";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation sessionTTL: 300, // 5 minutes, omit this parameter to make sessions never expire url: "redis://localhost:6379", // Default value, override with your own instance's URL }),});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [RedisChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_ioredis.RedisChatMessageHistory.html) from `@langchain/community/stores/message/ioredis` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage") ------------------------------------------------------------------ You can also directly pass in a previously created [node-redis](https://github.com/redis/node-redis) client instance: import { Redis } from "ioredis";import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "@langchain/community/stores/message/ioredis";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";const client = new Redis("redis://localhost:6379");const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, client, }),});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [RedisChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_ioredis.RedisChatMessageHistory.html) from `@langchain/community/stores/message/ioredis` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` ### Redis Sentinel Support[​](#redis-sentinel-support "Direct link to Redis Sentinel Support") You can enable a Redis Sentinel backed cache using [ioredis](https://github.com/redis/ioredis) This will require the installation of [ioredis](https://github.com/redis/ioredis) in your project. * npm * Yarn * pnpm npm install ioredis yarn add ioredis pnpm add ioredis import { Redis } from "ioredis";import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "@langchain/community/stores/message/ioredis";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";// Uses ioredis to facilitate Sentinel Connections see their docs for details on setting up more complex Sentinels: https://github.com/redis/ioredis#sentinelconst client = new Redis({ sentinels: [ { host: "localhost", port: 26379 }, { host: "localhost", port: 26380 }, ], name: "mymaster",});const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, client, }),});const model = new ChatOpenAI({ temperature: 0.5 });const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [RedisChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_ioredis.RedisChatMessageHistory.html) from `@langchain/community/stores/message/ioredis` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * * * #### Help us out by providing feedback on this documentation page: [ Previous Postgres Chat Memory ](/v0.1/docs/integrations/chat_memory/postgres/)[ Next Upstash Redis-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/upstash_redis/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/upstash_redis/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Upstash Redis-Backed Chat Memory Upstash Redis-Backed Chat Memory ================================ Because Upstash Redis works via a REST API, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and other Serverless environments. Based on Redis-Backed Chat Memory. For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for an Upstash [Redis](https://redis.io/) instance. Setup[​](#setup "Direct link to Setup") --------------------------------------- You will need to install [@upstash/redis](https://github.com/upstash/upstash-redis) in your project: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @upstash/redis @langchain/community yarn add @langchain/openai @upstash/redis @langchain/community pnpm add @langchain/openai @upstash/redis @langchain/community You will also need an Upstash Account and a Redis database to connect to. See instructions on [Upstash Docs](https://docs.upstash.com/redis) on how to create a HTTP client. Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in Redis must have a unique id. You can provide an optional `sessionTTL` to make sessions expire after a give number of seconds. The `config` parameter is passed directly into the `new Redis()` constructor of [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview), and takes all the same arguments. import { BufferMemory } from "langchain/memory";import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new UpstashRedisChatMessageHistory({ sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation sessionTTL: 300, // 5 minutes, omit this parameter to make sessions never expire config: { url: "https://ADD_YOURS_HERE.upstash.io", // Override with your own instance's URL token: "********", // Override with your own instance's token }, }),});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [UpstashRedisChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_upstash_redis.UpstashRedisChatMessageHistory.html) from `@langchain/community/stores/message/upstash_redis` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` Advanced Usage[​](#advanced-usage "Direct link to Advanced Usage") ------------------------------------------------------------------ You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance: import { Redis } from "@upstash/redis";import { BufferMemory } from "langchain/memory";import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";// Create your own Redis clientconst client = new Redis({ url: "https://ADD_YOURS_HERE.upstash.io", token: "********",});const memory = new BufferMemory({ chatHistory: new UpstashRedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, client, // You can reuse your existing Redis client }),});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [UpstashRedisChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_upstash_redis.UpstashRedisChatMessageHistory.html) from `@langchain/community/stores/message/upstash_redis` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * * * #### Help us out by providing feedback on this documentation page: [ Previous Redis-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/redis/)[ Next Xata Chat Memory ](/v0.1/docs/integrations/chat_memory/xata/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/xata/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Xata Chat Memory On this page Xata Chat Memory ================ [Xata](https://xata.io) is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data. With the `XataChatMessageHistory` class, you can use Xata databases for longer-term persistence of chat sessions. Because Xata works via a REST API and has a pure TypeScript SDK, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and any other Serverless environment. Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install the Xata CLI[​](#install-the-xata-cli "Direct link to Install the Xata CLI") npm install @xata.io/cli -g ### Create a database to be used as a vector store[​](#create-a-database-to-be-used-as-a-vector-store "Direct link to Create a database to be used as a vector store") In the [Xata UI](https://app.xata.io) create a new database. You can name it whatever you want, but for this example we'll use `langchain`. When executed for the first time, the Xata LangChain integration will create the table used for storing the chat messages. If a table with that name already exists, it will be left untouched. ### Initialize the project[​](#initialize-the-project "Direct link to Initialize the project") In your project, run: xata init and then choose the database you created above. This will also generate a `xata.ts` or `xata.js` file that defines the client you can use to interact with the database. See the [Xata getting started docs](https://xata.io/docs/getting-started/installation) for more details on using the Xata JavaScript/TypeScript SDK. Usage[​](#usage "Direct link to Usage") --------------------------------------- Each chat history session stored in Xata database must have a unique id. In this example, the `getXataClient()` function is used to create a new Xata client based on the environment variables. However, we recommend using the code generated by the `xata init` command, in which case you only need to import the `getXataClient()` function from the generated `xata.ts` file. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { BufferMemory } from "langchain/memory";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { XataChatMessageHistory } from "@langchain/community/stores/message/xata";import { BaseClient } from "@xata.io/client";// if you use the generated client, you don't need this function.// Just import getXataClient from the generated xata.ts instead.const getXataClient = () => { if (!process.env.XATA_API_KEY) { throw new Error("XATA_API_KEY not set"); } if (!process.env.XATA_DB_URL) { throw new Error("XATA_DB_URL not set"); } const xata = new BaseClient({ databaseURL: process.env.XATA_DB_URL, apiKey: process.env.XATA_API_KEY, branch: process.env.XATA_BRANCH || "main", }); return xata;};const memory = new BufferMemory({ chatHistory: new XataChatMessageHistory({ table: "messages", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation client: getXataClient(), apiKey: process.env.XATA_API_KEY, // The API key is needed for creating the table. }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [XataChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_xata.XataChatMessageHistory.html) from `@langchain/community/stores/message/xata` ### With pre-created table[​](#with-pre-created-table "Direct link to With pre-created table") If you don't want the code to always check if the table exists, you can create the table manually in the Xata UI and pass `createTable: false` to the constructor. The table must have the following columns: * `sessionId` of type `String` * `type` of type `String` * `role` of type `String` * `content` of type `Text` * `name` of type `String` * `additionalKwargs` of type `Text` import { BufferMemory } from "langchain/memory";import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { XataChatMessageHistory } from "@langchain/community/stores/message/xata";import { BaseClient } from "@xata.io/client";// Before running this example, see the docs at// https://js.langchain.com/docs/modules/memory/integrations/xata// if you use the generated client, you don't need this function.// Just import getXataClient from the generated xata.ts instead.const getXataClient = () => { if (!process.env.XATA_API_KEY) { throw new Error("XATA_API_KEY not set"); } if (!process.env.XATA_DB_URL) { throw new Error("XATA_DB_URL not set"); } const xata = new BaseClient({ databaseURL: process.env.XATA_DB_URL, apiKey: process.env.XATA_API_KEY, branch: process.env.XATA_BRANCH || "main", }); return xata;};const memory = new BufferMemory({ chatHistory: new XataChatMessageHistory({ table: "messages", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation client: getXataClient(), createTable: false, // Explicitly set to false if the table is already created }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/ #### API Reference: * [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [XataChatMessageHistory](https://api.js.langchain.com/classes/langchain_community_stores_message_xata.XataChatMessageHistory.html) from `@langchain/community/stores/message/xata` * * * #### Help us out by providing feedback on this documentation page: [ Previous Upstash Redis-Backed Chat Memory ](/v0.1/docs/integrations/chat_memory/upstash_redis/)[ Next Zep Memory ](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Setup](#setup) * [Install the Xata CLI](#install-the-xata-cli) * [Create a database to be used as a vector store](#create-a-database-to-be-used-as-a-vector-store) * [Initialize the project](#initialize-the-project) * [Usage](#usage) * [With pre-created table](#with-pre-created-table) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/chat_memory/zep_memory/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Astra DB Chat Memory](/v0.1/docs/integrations/chat_memory/astradb/) * [Cassandra Chat Memory](/v0.1/docs/integrations/chat_memory/cassandra/) * [Cloudflare D1-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/cloudflare_d1/) * [Convex Chat Memory](/v0.1/docs/integrations/chat_memory/convex/) * [DynamoDB-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/dynamodb/) * [Firestore Chat Memory](/v0.1/docs/integrations/chat_memory/firestore/) * [IPFS Datastore Chat Memory](/v0.1/docs/integrations/chat_memory/ipfs_datastore/) * [Momento-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/momento/) * [MongoDB Chat Memory](/v0.1/docs/integrations/chat_memory/mongodb/) * [Motörhead Memory](/v0.1/docs/integrations/chat_memory/motorhead_memory/) * [PlanetScale Chat Memory](/v0.1/docs/integrations/chat_memory/planetscale/) * [Postgres Chat Memory](/v0.1/docs/integrations/chat_memory/postgres/) * [Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/redis/) * [Upstash Redis-Backed Chat Memory](/v0.1/docs/integrations/chat_memory/upstash_redis/) * [Xata Chat Memory](/v0.1/docs/integrations/chat_memory/xata/) * [Zep Memory](/v0.1/docs/integrations/chat_memory/zep_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * Zep Memory Zep Memory ========== > Recall, understand, and extract data from chat histories. Power personalized AI experiences. > [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps. With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, while also reducing hallucinations, latency, and cost. How Zep works[​](#how-zep-works "Direct link to How Zep works") --------------------------------------------------------------- Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories. It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations. Zep does all of this asynchronously, ensuring these operations don't impact your user's chat experience. Data is persisted to database, allowing you to scale out when growth demands. Zep also provides a simple, easy to use abstraction for document vector search called Document Collections. This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database. Zep allows you to be more intentional about constructing your prompt: * automatically adding a few recent messages, with the number customized for your app; * a summary of recent conversations prior to the messages above; * and/or contextually relevant summaries or messages surfaced from the entire chat session. * and/or relevant Business data from Zep Document Collections. What is Zep Cloud?[​](#what-is-zep-cloud "Direct link to What is Zep Cloud?") ----------------------------------------------------------------------------- [Zep Cloud](http://www.getzep.com) is a managed service with Zep Open Source at its core. In addition to Zep Open Source's memory management features, Zep Cloud offers: * **Fact Extraction**: Automatically build fact tables from conversations, without having to define a data schema upfront. * **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events. * **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task. > Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks), [Zep Cloud Message History Example](https://help.getzep.com/langchain/examples/messagehistory-example) Setup[​](#setup "Direct link to Setup") --------------------------------------- See the instructions from [Zep](https://github.com/getzep/zep) for running the server locally or through an automated hosting provider. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community Usage[​](#usage "Direct link to Usage") --------------------------------------- import { ChatOpenAI } from "@langchain/openai";import { ConversationChain } from "langchain/chains";import { ZepMemory } from "@langchain/community/memory/zep";import { randomUUID } from "crypto";const sessionId = randomUUID(); // This should be unique for each user or each user's session.const zepURL = "http://localhost:8000";const memory = new ZepMemory({ sessionId, baseURL: zepURL, // This is optional. If you've enabled JWT authentication on your Zep server, you can // pass it in here. See https://docs.getzep.com/deployment/auth apiKey: "change_this_key",});const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });console.log("Memory Keys:", memory.memoryKeys);const res1 = await chain.invoke({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.invoke({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/console.log("Session ID: ", sessionId);console.log("Memory: ", await memory.loadMemoryVariables({})); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains` * [ZepMemory](https://api.js.langchain.com/classes/langchain_community_memory_zep.ZepMemory.html) from `@langchain/community/memory/zep` * * * #### Help us out by providing feedback on this documentation page: [ Previous Xata Chat Memory ](/v0.1/docs/integrations/chat_memory/xata/)[ Next Stores ](/v0.1/docs/integrations/stores/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/cassandra_storage/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * Cassandra KV On this page Cassandra KV ============ This example demonstrates how to setup chat history storage using the `CassandraKVStore` `BaseStore` integration. Note there is a `CassandraChatMessageHistory` integration which may be easier to use for chat history storage; the `CassandraKVStore` is useful if you want a more general-purpose key-value store with prefixable keys. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install cassandra-driver yarn add cassandra-driver pnpm add cassandra-driver Depending on your database providers, the specifics of how to connect to the database will vary. We will create a document `configConnection` which will be used as part of the store configuration. ### Apache Cassandra®[​](#apache-cassandra "Direct link to Apache Cassandra®") Storage Attached Indexes (used by `yieldKeys`) are supported in [Apache Cassandra® 5.0](https://cassandra.apache.org/_/blog/Apache-Cassandra-5.0-Features-Storage-Attached-Indexes.html) and above. You can use a standard connection document, for example: const configConnection = { contactPoints: ['h1', 'h2'], localDataCenter: 'datacenter1', credentials: { username: <...> as string, password: <...> as string, },}; ### Astra DB[​](#astra-db "Direct link to Astra DB") Astra DB is a cloud-native Cassandra-as-a-Service platform. 1. Create an [Astra DB account](https://astra.datastax.com/register). 2. Create a [vector enabled database](https://astra.datastax.com/createDatabase). 3. Create a [token](https://docs.datastax.com/en/astra/docs/manage-application-tokens.html) for your database. const configConnection = { serviceProviderArgs: { astra: { token: <...> as string, endpoint: <...> as string, }, },}; Instead of `endpoint:`, you many provide property `datacenterID:` and optionally `regionName:`. Usage[​](#usage "Direct link to Usage") --------------------------------------- import { CassandraKVStore } from "@langchain/community/storage/cassandra";import { AIMessage, HumanMessage } from "@langchain/core/messages";// This document is the Cassandra driver connection document; the example is to AstraDB but// any valid Cassandra connection can be used.const configConnection = { serviceProviderArgs: { astra: { token: "YOUR_TOKEN_OR_LOAD_FROM_ENV" as string, endpoint: "YOUR_ENDPOINT_OR_LOAD_FROM_ENV" as string, }, },};const store = new CassandraKVStore({ ...configConnection, keyspace: "test", // keyspace must exist table: "test_kv", // table will be created if it does not exist keyDelimiter: ":", // optional, default is "/"});// Define our encoder/decoder for converting between strings and Uint8Arraysconst encoder = new TextEncoder();const decoder = new TextDecoder();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [ `message:id:${index}`, encoder.encode(JSON.stringify(message)), ]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);// Make sure to decode the valuesconsole.log(retrievedMessages.map((v) => decoder.decode(v)));/**[ '{"id":["langchain","AIMessage"],"kwargs":{"content":"ai stuff..."}}', '{"id":["langchain","HumanMessage"],"kwargs":{"content":"human stuff..."}}'] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id:")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:2', 'message:id:1', 'message:id:3', 'message:id:0', 'message:id:4'] */// Finally, let's delete the keys from the storeawait store.mdelete(yieldedKeys); #### API Reference: * [CassandraKVStore](https://api.js.langchain.com/classes/langchain_community_storage_cassandra.CassandraKVStore.html) from `@langchain/community/storage/cassandra` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous Stores ](/v0.1/docs/integrations/stores/)[ Next File System Store ](/v0.1/docs/integrations/stores/file_system/) * [Setup](#setup) * [Apache Cassandra®](#apache-cassandra) * [Astra DB](#astra-db) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/file_system/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * File System Store On this page File System Store ================= Compatibility Only available on Node.js. This example demonstrates how to setup chat history storage using the `LocalFileStore` KV store integration. Usage[​](#usage "Direct link to Usage") --------------------------------------- info The path passed to the `.fromPath` must be a directory, not a file. The `LocalFileStore` is a wrapper around the `fs` module for storing data as key-value pairs. Each key value pair has its own file nested inside the directory passed to the `.fromPath` method. The file name is the key and inside contains the value of the key. import fs from "fs";import { LocalFileStore } from "langchain/storage/file_system";import { AIMessage, HumanMessage } from "@langchain/core/messages";// Instantiate the store using the `fromPath` method.const store = await LocalFileStore.fromPath("./messages");// Define our encoder/decoder for converting between strings and Uint8Arraysconst encoder = new TextEncoder();const decoder = new TextDecoder();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [ `message:id:${index}`, encoder.encode(JSON.stringify(message)), ]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);// Make sure to decode the valuesconsole.log(retrievedMessages.map((v) => decoder.decode(v)));/**[ '{"id":["langchain","AIMessage"],"kwargs":{"content":"ai stuff..."}}', '{"id":["langchain","HumanMessage"],"kwargs":{"content":"human stuff..."}}'] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id:")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:2', 'message:id:1', 'message:id:3', 'message:id:0', 'message:id:4'] */// Finally, let's delete the keys from the store// and delete the file.await store.mdelete(yieldedKeys);await fs.promises.rm("./messages", { recursive: true, force: true }); #### API Reference: * [LocalFileStore](https://api.js.langchain.com/classes/langchain_storage_file_system.LocalFileStore.html) from `langchain/storage/file_system` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous Cassandra KV ](/v0.1/docs/integrations/stores/cassandra_storage/)[ Next In Memory Store ](/v0.1/docs/integrations/stores/in_memory/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/in_memory/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * In Memory Store On this page In Memory Store =============== This example demonstrates how to setup chat history storage using the `InMemoryStore` KV store integration. Usage[​](#usage "Direct link to Usage") --------------------------------------- The `InMemoryStore` allows for a generic type to be assigned to the values in the store. We'll assign type `BaseMessage` as the type of our values, keeping with the theme of a chat history store. import { InMemoryStore } from "langchain/storage/in_memory";import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";// Instantiate the store using the `fromPath` method.const store = new InMemoryStore<BaseMessage>();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [`message:id:${index}`, message]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);console.log(retrievedMessages.map((v) => v));/**[ AIMessage { lc_kwargs: { content: 'ai stuff...', additional_kwargs: {} }, content: 'ai stuff...', ... }, HumanMessage { lc_kwargs: { content: 'human stuff...', additional_kwargs: {} }, content: 'human stuff...', ... }] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id:")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:0', 'message:id:1', 'message:id:2', 'message:id:3', 'message:id:4'] */// Finally, let's delete the keys from the storeawait store.mdelete(yieldedKeys); #### API Reference: * [InMemoryStore](https://api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `langchain/storage/in_memory` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [BaseMessage](https://api.js.langchain.com/classes/langchain_core_messages.BaseMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous File System Store ](/v0.1/docs/integrations/stores/file_system/)[ Next Stores ](/v0.1/docs/integrations/stores/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/ioredis_storage/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * IORedis On this page IORedis ======= This example demonstrates how to setup chat history storage using the `RedisByteStore` `BaseStore` integration. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install ioredis yarn add ioredis pnpm add ioredis Usage[​](#usage "Direct link to Usage") --------------------------------------- import { Redis } from "ioredis";import { RedisByteStore } from "@langchain/community/storage/ioredis";import { AIMessage, HumanMessage } from "@langchain/core/messages";// Define the client and storeconst client = new Redis({});const store = new RedisByteStore({ client,});// Define our encoder/decoder for converting between strings and Uint8Arraysconst encoder = new TextEncoder();const decoder = new TextDecoder();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [ `message:id:${index}`, encoder.encode(JSON.stringify(message)), ]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);// Make sure to decode the valuesconsole.log(retrievedMessages.map((v) => decoder.decode(v)));/**[ '{"id":["langchain","AIMessage"],"kwargs":{"content":"ai stuff..."}}', '{"id":["langchain","HumanMessage"],"kwargs":{"content":"human stuff..."}}'] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id:")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:2', 'message:id:1', 'message:id:3', 'message:id:0', 'message:id:4'] */// Finally, let's delete the keys from the store// and close the Redis connection.await store.mdelete(yieldedKeys);client.disconnect(); #### API Reference: * [RedisByteStore](https://api.js.langchain.com/classes/langchain_community_storage_ioredis.RedisByteStore.html) from `@langchain/community/storage/ioredis` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous Stores ](/v0.1/docs/integrations/stores/)[ Next Upstash Redis ](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/upstash_redis_storage/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * Upstash Redis On this page Upstash Redis ============= This example demonstrates how to setup chat history storage using the `UpstashRedisStore` `BaseStore` integration. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install @upstash/redis yarn add @upstash/redis pnpm add @upstash/redis Usage[​](#usage "Direct link to Usage") --------------------------------------- import { Redis } from "@upstash/redis";import { UpstashRedisStore } from "@langchain/community/storage/upstash_redis";import { AIMessage, HumanMessage } from "@langchain/core/messages";// Pro tip: define a helper function for getting your client// along with handling the case where your environment variables// are not set.const getClient = () => { if ( !process.env.UPSTASH_REDIS_REST_URL || !process.env.UPSTASH_REDIS_REST_TOKEN ) { throw new Error( "UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN must be set in the environment" ); } const client = new Redis({ url: process.env.UPSTASH_REDIS_REST_URL, token: process.env.UPSTASH_REDIS_REST_TOKEN, }); return client;};// Define the client and storeconst client = getClient();const store = new UpstashRedisStore({ client,});// Define our encoder/decoder for converting between strings and Uint8Arraysconst encoder = new TextEncoder();const decoder = new TextDecoder();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [ `message:id:${index}`, encoder.encode(JSON.stringify(message)), ]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);// Make sure to decode the valuesconsole.log(retrievedMessages.map((v) => decoder.decode(v)));/**[ '{"id":["langchain","AIMessage"],"kwargs":{"content":"ai stuff..."}}', '{"id":["langchain","HumanMessage"],"kwargs":{"content":"human stuff..."}}'] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:2', 'message:id:1', 'message:id:3', 'message:id:0', 'message:id:4'] */// Finally, let's delete the keys from the storeawait store.mdelete(yieldedKeys); #### API Reference: * [UpstashRedisStore](https://api.js.langchain.com/classes/langchain_community_storage_upstash_redis.UpstashRedisStore.html) from `@langchain/community/storage/upstash_redis` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous IORedis ](/v0.1/docs/integrations/stores/ioredis_storage/)[ Next Vercel KV ](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/stores/vercel_kv_storage/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [Cassandra KV](/v0.1/docs/integrations/stores/cassandra_storage/) * [File System Store](/v0.1/docs/integrations/stores/file_system/) * [In Memory Store](/v0.1/docs/integrations/stores/in_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [IORedis](/v0.1/docs/integrations/stores/ioredis_storage/) * [Upstash Redis](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Vercel KV](/v0.1/docs/integrations/stores/vercel_kv_storage/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Stores](/v0.1/docs/integrations/stores/) * Vercel KV On this page Vercel KV ========= This example demonstrates how to setup chat history storage using the `VercelKVStore` `BaseStore` integration. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install @vercel/kv yarn add @vercel/kv pnpm add @vercel/kv Usage[​](#usage "Direct link to Usage") --------------------------------------- import { createClient } from "@vercel/kv";import { VercelKVStore } from "@langchain/community/storage/vercel_kv";import { AIMessage, HumanMessage } from "@langchain/core/messages";// Pro tip: define a helper function for getting your client// along with handling the case where your environment variables// are not set.const getClient = () => { if (!process.env.VERCEL_KV_API_URL || !process.env.VERCEL_KV_API_TOKEN) { throw new Error( "VERCEL_KV_API_URL and VERCEL_KV_API_TOKEN must be set in the environment" ); } const client = createClient({ url: process.env.VERCEL_KV_API_URL, token: process.env.VERCEL_KV_API_TOKEN, }); return client;};// Define the client and storeconst client = getClient();const store = new VercelKVStore({ client,});// Define our encoder/decoder for converting between strings and Uint8Arraysconst encoder = new TextEncoder();const decoder = new TextDecoder();/** * Here you would define your LLM and chat chain, call * the LLM and eventually get a list of messages. * For this example, we'll assume we already have a list. */const messages = Array.from({ length: 5 }).map((_, index) => { if (index % 2 === 0) { return new AIMessage("ai stuff..."); } return new HumanMessage("human stuff...");});// Set your messages in the store// The key will be prefixed with `message:id:` and end// with the index.await store.mset( messages.map((message, index) => [ `message:id:${index}`, encoder.encode(JSON.stringify(message)), ]));// Now you can get your messages from the storeconst retrievedMessages = await store.mget(["message:id:0", "message:id:1"]);// Make sure to decode the valuesconsole.log(retrievedMessages.map((v) => decoder.decode(v)));/**[ '{"id":["langchain","AIMessage"],"kwargs":{"content":"ai stuff..."}}', '{"id":["langchain","HumanMessage"],"kwargs":{"content":"human stuff..."}}'] */// Or, if you want to get back all the keys you can call// the `yieldKeys` method.// Optionally, you can pass a key prefix to only get back// keys which match that prefix.const yieldedKeys = [];for await (const key of store.yieldKeys("message:id:")) { yieldedKeys.push(key);}// The keys are not encoded, so no decoding is necessaryconsole.log(yieldedKeys);/**[ 'message:id:2', 'message:id:1', 'message:id:3', 'message:id:0', 'message:id:4'] */// Finally, let's delete the keys from the storeawait store.mdelete(yieldedKeys); #### API Reference: * [VercelKVStore](https://api.js.langchain.com/classes/langchain_community_storage_vercel_kv.VercelKVStore.html) from `@langchain/community/storage/vercel_kv` * [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages` * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * * * #### Help us out by providing feedback on this documentation page: [ Previous Upstash Redis ](/v0.1/docs/integrations/stores/upstash_redis_storage/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/memory/chat_messages/custom/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Memory](/v0.1/docs/modules/memory/) * [\[Beta\] Memory](/v0.1/docs/modules/memory/) * [Chat Message History](/v0.1/docs/modules/memory/chat_messages/) * [Custom chat history](/v0.1/docs/modules/memory/chat_messages/custom/) * [Memory types](/v0.1/docs/modules/memory/types/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [Experimental](/v0.1/docs/modules/experimental/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * More * [Memory](/v0.1/docs/modules/memory/) * [Chat Message History](/v0.1/docs/modules/memory/chat_messages/) * Custom chat history Custom chat history =================== To create your own custom chat history class for a backing store, you can extend the [`BaseListChatMessageHistory`](https://api.js.langchain.com/classes/langchain_core_chat_history.BaseListChatMessageHistory.html) class. This requires you to implement the following methods: * `addMessage`, which adds a `BaseMessage` to the store for the current session. This usually involves serializing them into a simple object representation (defined as `StoredMessage` below) that the backing store can handle. * `getMessages`, which loads messages for a session and returns them as an array of `BaseMessage`s. For most databases, this involves deserializing stored messages into `BaseMessage`s. In addition, there are some optional methods that are nice to override: * `clear`, which removes all messages from the store. * `addMessages`, which will add multiple messages at a time to the current session. This can save round-trips to and from the backing store if many messages are being saved at once. The default implementation will call `addMessage` once per input message. Here’s an example that stores messages in-memory. For long-term persistence, you should use a real database. You’ll notice we use the `mapChatMessagesToStoredMessages` and `mapStoredMessagesToChatMessages` helper methods for consistent serialization and deserialization: import { BaseListChatMessageHistory } from "@langchain/core/chat_history";import { BaseMessage, StoredMessage, mapChatMessagesToStoredMessages, mapStoredMessagesToChatMessages,} from "@langchain/core/messages";// Not required, but usually chat message histories will handle multiple sessions// for different users, and should take some kind of sessionId as input.export interface CustomChatMessageHistoryInput { sessionId: string;}export class CustomChatMessageHistory extends BaseListChatMessageHistory { lc_namespace = ["langchain", "stores", "message"]; sessionId: string; // Simulate a real database layer. Stores serialized objects. fakeDatabase: Record<string, StoredMessage[]> = {}; constructor(fields: CustomChatMessageHistoryInput) { super(fields); this.sessionId = fields.sessionId; } async getMessages(): Promise<BaseMessage[]> { const messages = this.fakeDatabase[this.sessionId] ?? []; return mapStoredMessagesToChatMessages(messages); } async addMessage(message: BaseMessage): Promise<void> { if (this.fakeDatabase[this.sessionId] === undefined) { this.fakeDatabase[this.sessionId] = []; } const serializedMessages = mapChatMessagesToStoredMessages([message]); this.fakeDatabase[this.sessionId].push(serializedMessages[0]); } async addMessages(messages: BaseMessage[]): Promise<void> { if (this.fakeDatabase[this.sessionId] === undefined) { this.fakeDatabase[this.sessionId] = []; } const existingMessages = this.fakeDatabase[this.sessionId]; const serializedMessages = mapChatMessagesToStoredMessages(messages); this.fakeDatabase[this.sessionId] = existingMessages.concat(serializedMessages); } async clear(): Promise<void> { delete this.fakeDatabase[this.sessionId]; }} You can then use this chat history as usual: import { AIMessage, HumanMessage } from "@langchain/core/messages";const chatHistory = new CustomChatMessageHistory({ sessionId: "test" });await chatHistory.addMessages([ new HumanMessage("Hello there!"), new AIMessage("Hello to you too!"),]);await chatHistory.getMessages(); [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Hello there!", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello there!", name: undefined, additional_kwargs: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello to you too!", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello to you too!", name: undefined, additional_kwargs: {} }] * * * #### Help us out by providing feedback on this documentation page: [ Previous Chat Message History ](/v0.1/docs/modules/memory/chat_messages/)[ Next Memory types ](/v0.1/docs/modules/memory/types/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/guides/evaluation/examples/comparisons/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Debugging](/v0.1/docs/guides/debugging/) * [Deployment](/v0.1/docs/guides/deployment/) * [Evaluation](/v0.1/docs/guides/evaluation/) * [String Evaluators](/v0.1/docs/guides/evaluation/string/) * [Comparison Evaluators](/v0.1/docs/guides/evaluation/comparison/) * [Trajectory Evaluators](/v0.1/docs/guides/evaluation/trajectory/) * [Examples](/v0.1/docs/guides/evaluation/examples/) * [Comparing Chain Outputs](/v0.1/docs/guides/evaluation/examples/comparisons/) * [Extending LangChain.js](/v0.1/docs/guides/extending_langchain/) * [Fallbacks](/v0.1/docs/guides/fallbacks/) * [LangSmith Walkthrough](/v0.1/docs/guides/langsmith_evaluation/) * [Migrating to 0.1](/v0.1/docs/guides/migrating/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Guides](/v0.1/docs/guides/) * [Evaluation](/v0.1/docs/guides/evaluation/) * [Examples](/v0.1/docs/guides/evaluation/examples/) * Comparing Chain Outputs Comparing Chain Outputs ======================= Suppose you have two different prompts (or LLMs). How do you know which will generate "better" results? One automated way to predict the preferred configuration is to use a `PairwiseStringEvaluator` like the `PairwiseStringEvalChain`[\[1\]](#cite_note-1). This chain prompts an LLM to select which output is preferred, given a specific input. For this evaluation, we will need 3 things: 1. An evaluator 2. A dataset of inputs 3. 2 (or more) LLMs, Chains, or Agents to compare Then we will aggregate the restults to determine the preferred model. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { loadEvaluator } from "langchain/evaluation";import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "@langchain/openai";import { ChainValues } from "@langchain/core/utils/types";import { SerpAPI } from "@langchain/community/tools/serpapi";// Step 1. Create the Evaluator// In this example, you will use gpt-4 to select which output is preferred.const evalChain = await loadEvaluator("pairwise_string");// Step 2. Select Dataset// If you already have real usage data for your LLM, you can use a representative sample. More examples// provide more reliable results. We will use some example queries someone might have about how to use langchain here.const dataset = [ "Can I use LangChain to automatically rate limit or retry failed API calls?", "How can I ensure the accuracy and reliability of the travel data with LangChain?", "How can I track student progress with LangChain?", "langchain how to handle different document formats?", // "Can I chain API calls to different services in LangChain?", // "How do I handle API errors in my langchain app?", // "How do I handle different currency and tax calculations with LangChain?", // "How do I extract specific data from the document using langchain tools?", // "Can I use LangChain to handle real-time data from these APIs?", // "Can I use LangChain to track and manage travel alerts and updates?", // "Can I use LangChain to create and grade quizzes from these APIs?", // "Can I use LangChain to automate data cleaning and preprocessing for the AI plugins?", // "How can I ensure the accuracy and reliability of the financial data with LangChain?", // "Can I integrate medical imaging tools with LangChain?", // "How do I ensure the privacy and security of the patient data with LangChain?", // "How do I handle authentication for APIs in LangChain?", // "Can I use LangChain to recommend personalized study materials?", // "How do I connect to the arXiv API using LangChain?", // "How can I use LangChain to interact with educational APIs?", // "langchain how to sort retriever results - relevance or date?", // "Can I integrate a recommendation engine with LangChain to suggest products?"];// Step 3. Define Models to Compare// We will be comparing two agents in this case.const model = new ChatOpenAI({ temperature: 0, model: "gpt-3.5-turbo-16k-0613",});const serpAPI = new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us",});serpAPI.description = "Useful when you need to answer questions about current events. You should ask targeted questions.";const tools = [serpAPI];const conversationAgent = await initializeAgentExecutorWithOptions( tools, model, { agentType: "chat-zero-shot-react-description", });const functionsAgent = await initializeAgentExecutorWithOptions(tools, model, { agentType: "openai-functions",});// Step 4. Generate Responses// We will generate outputs for each of the models before evaluating them.const results = [];const agents = [functionsAgent, conversationAgent];const concurrencyLevel = 4; // How many concurrent agents to run. May need to decrease if OpenAI is rate limiting.// We will only run the first 20 examples of this dataset to speed things up// This will lead to larger confidence intervals downstream.const batch = [];for (const example of dataset) { batch.push( Promise.all(agents.map((agent) => agent.invoke({ input: example }))) ); if (batch.length >= concurrencyLevel) { const batchResults = await Promise.all(batch); results.push(...batchResults); batch.length = 0; }}if (batch.length) { const batchResults = await Promise.all(batch); results.push(...batchResults);}console.log(JSON.stringify(results));// Step 5. Evaluate Pairs// Now it's time to evaluate the results. For each agent response, run the evaluation chain to select which output is preferred (or return a tie).// Randomly select the input order to reduce the likelihood that one model will be preferred just because it is presented first.const preferences = await predictPreferences(dataset, results);// Print out the ratio of preferences.const nameMap: { [key: string]: string } = { a: "OpenAI Functions Agent", b: "Structured Chat Agent",};const counts = counter(preferences);const prefRatios: { [key: string]: number } = {};for (const k of Object.keys(counts)) { prefRatios[k] = counts[k] / preferences.length;}for (const k of Object.keys(prefRatios)) { console.log(`${nameMap[k]}: ${(prefRatios[k] * 100).toFixed(2)}%`);}/*OpenAI Functions Agent: 100.00% */// Estimate Confidence Intervals// The results seem pretty clear, but if you want to have a better sense of how confident we are, that model "A" (the OpenAI Functions Agent) is the preferred model, we can calculate confidence intervals.// Below, use the Wilson score to estimate the confidence interval.for (const [which_, name] of Object.entries(nameMap)) { const [low, high] = wilsonScoreInterval(preferences, which_); console.log( `The "${name}" would be preferred between ${(low * 100).toFixed(2)}% and ${( high * 100 ).toFixed(2)}% percent of the time (with 95% confidence).` );}/*The "OpenAI Functions Agent" would be preferred between 51.01% and 100.00% percent of the time (with 95% confidence).The "Structured Chat Agent" would be preferred between 0.00% and 48.99% percent of the time (with 95% confidence). */function counter(arr: string[]): { [key: string]: number } { return arr.reduce( (countMap: { [key: string]: number }, word: string) => ({ ...countMap, [word]: (countMap[word] || 0) + 1, }), {} );}async function predictPreferences(dataset: string[], results: ChainValues[][]) { const preferences: string[] = []; for (let i = 0; i < dataset.length; i += 1) { const input = dataset[i]; const resA = results[i][0]; const resB = results[i][1]; // Flip a coin to reduce persistent position bias let a; let b; let predA; let predB; if (Math.random() < 0.5) { predA = resA; predB = resB; a = "a"; b = "b"; } else { predA = resB; predB = resA; a = "b"; b = "a"; } const evalRes = await evalChain.evaluateStringPairs({ input, prediction: predA.output || predA.toString(), predictionB: predB.output || predB.toString(), }); if (evalRes.value === "A") { preferences.push(a); } else if (evalRes.value === "B") { preferences.push(b); } else { preferences.push("None"); // No preference } } return preferences;}function wilsonScoreInterval( preferences: string[], which = "a", z = 1.96): [number, number] { const totalPreferences = preferences.filter( (p) => p === "a" || p === "b" ).length; const ns = preferences.filter((p) => p === which).length; if (totalPreferences === 0) { return [0, 0]; } const pHat = ns / totalPreferences; const denominator = 1 + z ** 2 / totalPreferences; const adjustment = (z / denominator) * Math.sqrt( (pHat * (1 - pHat)) / totalPreferences + z ** 2 / (4 * totalPreferences ** 2) ); const center = (pHat + z ** 2 / (2 * totalPreferences)) / denominator; const lowerBound = Math.min(Math.max(center - adjustment, 0.0), 1.0); const upperBound = Math.min(Math.max(center + adjustment, 0.0), 1.0); return [lowerBound, upperBound];} #### API Reference: * [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation` * [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [ChainValues](https://api.js.langchain.com/types/langchain_core_utils_types.ChainValues.html) from `@langchain/core/utils/types` * [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi` 1. Note: Automated evals are still an open research topic and are best used alongside other evaluation approaches. LLM preferences exhibit biases, including banal ones like the order of outputs. In choosing preferences, "ground truth" may not be taken into account, which may lead to scores that aren't grounded in utility.\_ * * * #### Help us out by providing feedback on this documentation page: [ Previous Examples ](/v0.1/docs/guides/evaluation/examples/)[ Next Extending LangChain.js ](/v0.1/docs/guides/extending_langchain/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Puppeteer Webpages, with Puppeteer ======================== Compatibility Only available on Node.js. This example goes over how to load data from webpages using Puppeteer. One document will be created for each webpage. Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium. You can use Puppeteer to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render. If you want a lighterweight solution, and the webpages you want to load do not require JavaScript to render, you can use the [CheerioWebBaseLoader](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) instead. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install puppeteer yarn add puppeteer pnpm add puppeteer Usage[​](#usage "Direct link to Usage") --------------------------------------- import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";/** * Loader uses `page.evaluate(() => document.body.innerHTML)` * as default evaluate function **/const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/");const docs = await loader.load(); Options[​](#options "Direct link to Options") --------------------------------------------- Here's an explanation of the parameters you can pass to the PuppeteerWebBaseLoader constructor using the PuppeteerWebBaseLoaderOptions interface: type PuppeteerWebBaseLoaderOptions = { launchOptions?: PuppeteerLaunchOptions; gotoOptions?: PuppeteerGotoOptions; evaluate?: (page: Page, browser: Browser) => Promise<string>;}; 1. `launchOptions`: an optional object that specifies additional options to pass to the puppeteer.launch() method. This can include options such as the headless flag to launch the browser in headless mode, or the slowMo option to slow down Puppeteer's actions to make them easier to follow. 2. `gotoOptions`: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful. 3. `evaluate`: an optional function that can be used to evaluate JavaScript code on the page using the page.evaluate() method. This can be useful for extracting data from the page or interacting with page elements. The function should return a Promise that resolves to a string containing the result of the evaluation. By passing these options to the `PuppeteerWebBaseLoader` constructor, you can customize the behavior of the loader and use Puppeteer's powerful features to scrape and interact with web pages. Here is a basic example to do it: import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";const loaderWithOptions = new PuppeteerWebBaseLoader( "https://www.tabnews.com.br/", { launchOptions: { headless: true, }, gotoOptions: { waitUntil: "domcontentloaded", }, /** Pass custom evaluate , in this case you get page and browser instances */ async evaluate(page, browser) { await page.waitForResponse("https://www.tabnews.com.br/va/view"); const result = await page.evaluate(() => document.body.innerHTML); await browser.close(); return result; }, });const docsFromLoaderWithOptions = await loaderWithOptions.load();console.log({ docsFromLoaderWithOptions }); #### API Reference: * [PuppeteerWebBaseLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_puppeteer.PuppeteerWebBaseLoader.html) from `langchain/document_loaders/web/puppeteer` ### Screenshots[​](#screenshots "Direct link to Screenshots") To take a screenshot of a site, initialize the loader the same as above, and call the `.screenshot()` method. This will return an instance of `Document` where the page content is a base64 encoded image, and the metadata contains a `source` field with the URL of the page. import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";const loaderWithOptions = new PuppeteerWebBaseLoader("https://langchain.com", { launchOptions: { headless: true, }, gotoOptions: { waitUntil: "domcontentloaded", },});const screenshot = await loaderWithOptions.screenshot();console.log({ screenshot }); #### API Reference: * [PuppeteerWebBaseLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_puppeteer.PuppeteerWebBaseLoader.html) from `langchain/document_loaders/web/puppeteer` * * * #### Help us out by providing feedback on this documentation page: [ Previous Cheerio ](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/)[ Next Playwright ](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Playwright Webpages, with Playwright ========================= Compatibility Only available on Node.js. This example goes over how to load data from webpages using Playwright. One document will be created for each webpage. Playwright is a Node.js library that provides a high-level API for controlling multiple browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render. If you want a lighterweight solution, and the webpages you want to load do not require JavaScript to render, you can use the [`CheerioWebBaseLoader`](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) instead. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install playwright yarn add playwright pnpm add playwright Usage[​](#usage "Direct link to Usage") --------------------------------------- import { PlaywrightWebBaseLoader } from "langchain/document_loaders/web/playwright";/** * Loader uses `page.content()` * as default evaluate function **/const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/");const docs = await loader.load(); Options[​](#options "Direct link to Options") --------------------------------------------- Here's an explanation of the parameters you can pass to the PlaywrightWebBaseLoader constructor using the PlaywrightWebBaseLoaderOptions interface: type PlaywrightWebBaseLoaderOptions = { launchOptions?: LaunchOptions; gotoOptions?: PlaywrightGotoOptions; evaluate?: PlaywrightEvaluate;}; 1. `launchOptions`: an optional object that specifies additional options to pass to the playwright.chromium.launch() method. This can include options such as the headless flag to launch the browser in headless mode. 2. `gotoOptions`: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful. 3. `evaluate`: an optional function that can be used to evaluate JavaScript code on the page using a custom evaluation function. This can be useful for extracting data from the page, interacting with page elements, or handling specific HTTP responses. The function should return a Promise that resolves to a string containing the result of the evaluation. By passing these options to the `PlaywrightWebBaseLoader` constructor, you can customize the behavior of the loader and use Playwright's powerful features to scrape and interact with web pages. Here is a basic example to do it: import { PlaywrightWebBaseLoader, Page, Browser,} from "langchain/document_loaders/web/playwright";const url = "https://www.tabnews.com.br/";const loader = new PlaywrightWebBaseLoader(url);const docs = await loader.load();// raw HTML page contentconst extractedContents = docs[0].pageContent; And a more advanced example: import { PlaywrightWebBaseLoader, Page, Browser,} from "langchain/document_loaders/web/playwright";const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/", { launchOptions: { headless: true, }, gotoOptions: { waitUntil: "domcontentloaded", }, /** Pass custom evaluate, in this case you get page and browser instances */ async evaluate(page: Page, browser: Browser, response: Response | null) { await page.waitForResponse("https://www.tabnews.com.br/va/view"); const result = await page.evaluate(() => document.body.innerHTML); return result; },});const docs = await loader.load(); * * * #### Help us out by providing feedback on this documentation page: [ Previous Puppeteer ](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/)[ Next Apify Dataset ](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Apify Dataset Apify Dataset ============= This guide shows how to use [Apify](https://apify.com) with LangChain to load documents from an Apify Dataset. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ [Apify](https://apify.com) is a cloud platform for web scraping and data extraction, which provides an [ecosystem](https://apify.com/store) of more than a thousand ready-made apps called _Actors_ for various web scraping, crawling, and data extraction use cases. This guide shows how to load documents from an [Apify Dataset](https://docs.apify.com/platform/storage/dataset) — a scalable append-only storage built for storing structured web scraping results, such as a list of products or Google SERPs, and then export them to various formats like JSON, CSV, or Excel. Datasets are typically used to save results of Actors. For example, [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor deeply crawls websites such as documentation, knowledge bases, help centers, or blogs, and then stores the text content of webpages into a dataset, from which you can feed the documents into a vector index and answer questions from it. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll first need to install the official Apify client: * npm * Yarn * pnpm npm install apify-client yarn add apify-client pnpm add apify-client tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community You'll also need to sign up and retrieve your [Apify API token](https://console.apify.com/account/integrations). Usage[​](#usage "Direct link to Usage") --------------------------------------- ### From a New Dataset[​](#from-a-new-dataset "Direct link to From a New Dataset") If you don't already have an existing dataset on the Apify platform, you'll need to initialize the document loader by calling an Actor and waiting for the results. **Note:** Calling an Actor can take a significant amount of time, on the order of hours, or even days for large sites! Here's an example: import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { Document } from "@langchain/core/documents";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";/* * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents. * In the below example, the Apify dataset format looks like this: * { * "url": "https://apify.com", * "text": "Apify is the best web scraping and automation platform." * } */const loader = await ApifyDatasetLoader.fromActorCall( "apify/website-content-crawler", { startUrls: [{ url: "https://js.langchain.com/docs/" }], }, { datasetMappingFunction: (item) => new Document({ pageContent: (item.text || "") as string, metadata: { source: item.url }, }), clientOptions: { token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN }, });const docs = await loader.load();const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());const model = new ChatOpenAI({ temperature: 0,});const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm: model, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});const res = await chain.invoke({ input: "What is LangChain?" });console.log(res.answer);console.log(res.context.map((doc) => doc.metadata.source));/* LangChain is a framework for developing applications powered by language models. [ 'https://js.langchain.com/docs/', 'https://js.langchain.com/docs/modules/chains/', 'https://js.langchain.com/docs/modules/chains/llmchain/', 'https://js.langchain.com/docs/category/functions-4' ]*/ #### API Reference: * [ApifyDatasetLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_apify_dataset.ApifyDatasetLoader.html) from `langchain/document_loaders/web/apify_dataset` * [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` From an Existing Dataset[​](#from-an-existing-dataset "Direct link to From an Existing Dataset") ------------------------------------------------------------------------------------------------ If you already have an existing dataset on the Apify platform, you can initialize the document loader with the constructor directly: import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { Document } from "@langchain/core/documents";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createRetrievalChain } from "langchain/chains/retrieval";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";/* * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents. * In the below example, the Apify dataset format looks like this: * { * "url": "https://apify.com", * "text": "Apify is the best web scraping and automation platform." * } */const loader = new ApifyDatasetLoader("your-dataset-id", { datasetMappingFunction: (item) => new Document({ pageContent: (item.text || "") as string, metadata: { source: item.url }, }), clientOptions: { token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN },});const docs = await loader.load();const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());const model = new ChatOpenAI({ temperature: 0,});const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm: model, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});const res = await chain.invoke({ input: "What is LangChain?" });console.log(res.answer);console.log(res.context.map((doc) => doc.metadata.source));/* LangChain is a framework for developing applications powered by language models. [ 'https://js.langchain.com/docs/', 'https://js.langchain.com/docs/modules/chains/', 'https://js.langchain.com/docs/modules/chains/llmchain/', 'https://js.langchain.com/docs/category/functions-4' ]*/ #### API Reference: * [ApifyDatasetLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_apify_dataset.ApifyDatasetLoader.html) from `langchain/document_loaders/web/apify_dataset` * [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * * * #### Help us out by providing feedback on this documentation page: [ Previous Playwright ](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/)[ Next AssemblyAI Audio Transcript ](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Browserbase Loader On this page Browserbase Loader ================== Description[​](#description "Direct link to Description") --------------------------------------------------------- [Browserbase](https://browserbase.com) is a serverless platform for running headless browsers, it offers advanced debugging, session recordings, stealth mode, integrated proxies and captcha solving. Installation[​](#installation "Direct link to Installation") ------------------------------------------------------------ * Get an API key from [browserbase.com](https://browserbase.com) and set it in environment variables (`BROWSERBASE_API_KEY`). * Install the [Browserbase SDK](http://github.com/browserbase/js-sdk): * npm * Yarn * pnpm npm i @browserbasehq/sdk yarn add @browserbasehq/sdk pnpm add @browserbasehq/sdk Example[​](#example "Direct link to Example") --------------------------------------------- Utilize the `BrowserbaseLoader` as follows to allow your agent to load websites: import { BrowserbaseLoader } from "langchain/document_loaders/web/browserbase";const loader = new BrowserbaseLoader(["https://example.com"], { textContent: true,});const docs = await loader.load(); #### API Reference: * [BrowserbaseLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_browserbase.BrowserbaseLoader.html) from `langchain/document_loaders/web/browserbase` Arguments[​](#arguments "Direct link to Arguments") --------------------------------------------------- * `urls`: Required. List of URLs to load. Options[​](#options "Direct link to Options") --------------------------------------------- * `api_key`: Optional. Specifies Browserbase API key. Defaults is the `BROWSERBASE_API_KEY` environment variable. * `text_content`: Optional. Load pages as readable text. Default is `False`. * * * #### Help us out by providing feedback on this documentation page: [ Previous Azure Blob Storage File ](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/)[ Next College Confidential ](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Description](#description) * [Installation](#installation) * [Example](#example) * [Arguments](#arguments) * [Options](#options) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * AssemblyAI Audio Transcript AssemblyAI Audio Transcript =========================== This covers how to load audio (and video) transcripts as document objects from a file using the [AssemblyAI API](https://www.assemblyai.com/docs/api-reference/transcript). Usage[​](#usage "Direct link to Usage") --------------------------------------- First, you'll need to install the official AssemblyAI package: * npm * Yarn * pnpm npm install assemblyai yarn add assemblyai pnpm add assemblyai To use the loaders you need an [AssemblyAI account](https://www.assemblyai.com/dashboard/signup) and [get your AssemblyAI API key from the dashboard](https://www.assemblyai.com/app/account). Then, configure the API key as the `ASSEMBLYAI_API_KEY` environment variable or the `apiKey` options parameter. import { AudioTranscriptLoader, // AudioTranscriptParagraphsLoader, // AudioTranscriptSentencesLoader} from "langchain/document_loaders/web/assemblyai";// You can also use a local file path and the loader will upload it to AssemblyAI for you.const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";// Use `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader` for splitting the transcript into paragraphs or sentencesconst loader = new AudioTranscriptLoader( { audio: audioUrl, // any other parameters as documented here: https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript }, { apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable });const docs = await loader.load();console.dir(docs, { depth: Infinity }); #### API Reference: * [AudioTranscriptLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_assemblyai.AudioTranscriptLoader.html) from `langchain/document_loaders/web/assemblyai` > **info** > > * You can use the `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader` to split the transcript into paragraphs or sentences. > * The `audio` parameter can be a URL, a local file path, a buffer, or a stream. > * The `audio` can also be a video file. See the [list of supported file types in the FAQ doc](https://www.assemblyai.com/docs/concepts/faq#:~:text=file%20types%20are%20supported). > * If you don't pass in the `apiKey` option, the loader will use the `ASSEMBLYAI_API_KEY` environment variable. > * You can add more properties in addition to `audio`. Find the full list of request parameters in the [AssemblyAI API docs](https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript). You can also use the `AudioSubtitleLoader` to get `srt` or `vtt` subtitles as a document. import { AudioSubtitleLoader } from "langchain/document_loaders/web/assemblyai";// You can also use a local file path and the loader will upload it to AssemblyAI for you.const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";const loader = new AudioSubtitleLoader( { audio: audioUrl, // any other parameters as documented here: https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript }, "srt", // srt or vtt { apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable });const docs = await loader.load();console.dir(docs, { depth: Infinity }); #### API Reference: * [AudioSubtitleLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_assemblyai.AudioSubtitleLoader.html) from `langchain/document_loaders/web/assemblyai` * * * #### Help us out by providing feedback on this documentation page: [ Previous Apify Dataset ](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/)[ Next Azure Blob Storage Container ](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * College Confidential College Confidential ==================== This example goes over how to load data from the college confidential website, using Cheerio. One document will be created for each page. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install cheerio yarn add cheerio pnpm add cheerio Usage[​](#usage "Direct link to Usage") --------------------------------------- import { CollegeConfidentialLoader } from "langchain/document_loaders/web/college_confidential";const loader = new CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/");const docs = await loader.load(); * * * #### Help us out by providing feedback on this documentation page: [ Previous Browserbase Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/)[ Next Confluence ](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/confluence/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Confluence On this page Confluence ========== Compatibility Only available on Node.js. This covers how to load document objects from pages in a Confluence space. Credentials[​](#credentials "Direct link to Credentials") --------------------------------------------------------- * You'll need to set up an access token and provide it along with your confluence username in order to authenticate the request * You'll also need the `space key` for the space containing the pages to load as documents. This can be found in the url when navigating to your space e.g. `https://example.atlassian.net/wiki/spaces/{SPACE_KEY}` * And you'll need to install `html-to-text` to parse the pages into plain text * npm * Yarn * pnpm npm install html-to-text yarn add html-to-text pnpm add html-to-text Usage[​](#usage "Direct link to Usage") --------------------------------------- import { ConfluencePagesLoader } from "langchain/document_loaders/web/confluence";const username = process.env.CONFLUENCE_USERNAME;const accessToken = process.env.CONFLUENCE_ACCESS_TOKEN;const personalAccessToken = process.env.CONFLUENCE_PAT;if (username && accessToken) { const loader = new ConfluencePagesLoader({ baseUrl: "https://example.atlassian.net/wiki", spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE", username, accessToken, }); const documents = await loader.load(); console.log(documents);} else if (personalAccessToken) { const loader = new ConfluencePagesLoader({ baseUrl: "https://example.atlassian.net/wiki", spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE", personalAccessToken, }); const documents = await loader.load(); console.log(documents);} else { console.log( "You need either a username and access token, or a personal access token (PAT), to use this example." );} #### API Reference: * [ConfluencePagesLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_confluence.ConfluencePagesLoader.html) from `langchain/document_loaders/web/confluence` * * * #### Help us out by providing feedback on this documentation page: [ Previous College Confidential ](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/)[ Next Couchbase ](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Credentials](#credentials) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Couchbase Couchbase ========= [Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. This guide shows how to use load documents from couchbase database. Installation ============ * npm * Yarn * pnpm npm install couchbase yarn add couchbase pnpm add couchbase Usage[​](#usage "Direct link to Usage") --------------------------------------- ### Querying for Documents from Couchbase[​](#querying-for-documents-from-couchbase "Direct link to Querying for Documents from Couchbase") For more details on connecting to a Couchbase cluster, please check the [Node.js SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/howtos/managing-connections.html#connection-strings). For help with querying for documents using SQL++ (SQL for JSON), please check the [documentation](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html). import { CouchbaseDocumentLoader } from "langchain/document_loaders/web/couchbase";import { Cluster } from "couchbase";const connectionString = "couchbase://localhost"; // valid couchbase connection stringconst dbUsername = "Administrator"; // valid database user with read access to the bucket being queriedconst dbPassword = "Password"; // password for the database user// query is a valid SQL++ queryconst query = ` SELECT h.* FROM \`travel-sample\`.inventory.hotel h WHERE h.country = 'United States' LIMIT 1`; ### Connect to Couchbase Cluster[​](#connect-to-couchbase-cluster "Direct link to Connect to Couchbase Cluster") const couchbaseClient = await Cluster.connect(connectionString, { username: dbUsername, password: dbPassword, configProfile: "wanDevelopment",}); ### Create the Loader[​](#create-the-loader "Direct link to Create the Loader") const loader = new CouchbaseDocumentLoader( couchbaseClient, // The connected couchbase cluster client query // A valid SQL++ query which will return the required data); ### Load Documents[​](#load-documents "Direct link to Load Documents") You can fetch the documents by calling the `load` method of the loader. It will return a list with all the documents. If you want to avoid this blocking call, you can call `lazy_load` method that returns an Iterator. // using load methoddocs = await loader.load();console.log(docs); // using lazy_loadfor await (const doc of this.lazyLoad()) { console.log(doc); break; // break based on required condition} ### Specifying Fields with Content and Metadata[​](#specifying-fields-with-content-and-metadata "Direct link to Specifying Fields with Content and Metadata") The fields that are part of the Document content can be specified using the `pageContentFields` parameter. The metadata fields for the Document can be specified using the `metadataFields` parameter. const loaderWithSelectedFields = new CouchbaseDocumentLoader( couchbaseClient, query, // pageContentFields [ "address", "name", "city", "phone", "country", "geo", "description", "reviews", ], ["id"] // metadataFields);const filtered_docs = await loaderWithSelectedFields.load();console.log(filtered_docs); * * * #### Help us out by providing feedback on this documentation page: [ Previous Confluence ](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/)[ Next Figma ](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/figma/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Figma Figma ===== This example goes over how to load data from a Figma file. You will need a Figma access token in order to get started. import { FigmaFileLoader } from "langchain/document_loaders/web/figma";const loader = new FigmaFileLoader({ accessToken: "FIGMA_ACCESS_TOKEN", // or load it from process.env.FIGMA_ACCESS_TOKEN nodeIds: ["id1", "id2", "id3"], fileKey: "key",});const docs = await loader.load();console.log({ docs }); #### API Reference: * [FigmaFileLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_figma.FigmaFileLoader.html) from `langchain/document_loaders/web/figma` You can find your Figma file's key and node ids by opening the file in your browser and extracting them from the URL: https://www.figma.com/file/<YOUR FILE KEY HERE>/LangChainJS-Test?type=whiteboard&node-id=<YOUR NODE ID HERE>&t=e6lqWkKecuYQRyRg-0 * * * #### Help us out by providing feedback on this documentation page: [ Previous Couchbase ](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/)[ Next Firecrawl ](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Firecrawl Firecrawl ========= This guide shows how to use [Firecrawl](https://firecrawl.dev) with LangChain to load web data into an LLM-ready format using Firecrawl. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ [FireCrawl](https://firecrawl.dev) crawls and convert any website into LLM-ready data. It crawls all accessible subpages and give you clean markdown and metadata for each. No sitemap required. FireCrawl handles complex tasks such as reverse proxies, caching, rate limits, and content blocked by JavaScript. Built by the [mendable.ai](https://mendable.ai) team. This guide shows how to scrap and crawl entire websites and load them using the `FireCrawlLoader` in LangChain. Setup[​](#setup "Direct link to Setup") --------------------------------------- Sign up and get your free [FireCrawl API key](https://firecrawl.dev) to start. FireCrawl offers 300 free credits to get you started, and it's [open-source](https://github.com/mendableai/firecrawl) in case you want to self-host. Usage[​](#usage "Direct link to Usage") --------------------------------------- Here's an example of how to use the `FireCrawlLoader` to load web search results: Firecrawl offers 2 modes: `scrape` and `crawl`. In `scrape` mode, Firecrawl will only scrape the page you provide. In `crawl` mode, Firecrawl will crawl the entire website. * npm * Yarn * pnpm npm install @mendable/firecrawl-js yarn add @mendable/firecrawl-js pnpm add @mendable/firecrawl-js import { FireCrawlLoader } from "langchain/document_loaders/web/firecrawl";const loader = new FireCrawlLoader({ url: "https://firecrawl.dev", // The URL to scrape apiKey: process.env.FIRECRAWL_API_KEY, // Optional, defaults to `FIRECRAWL_API_KEY` in your env. mode: "scrape", // The mode to run the crawler in. Can be "scrape" for single urls or "crawl" for all accessible subpages params: { // optional parameters based on Firecrawl API docs // For API documentation, visit https://docs.firecrawl.dev },});const docs = await loader.load(); #### API Reference: * [FireCrawlLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_firecrawl.FireCrawlLoader.html) from `langchain/document_loaders/web/firecrawl` ### Additional Parameters[​](#additional-parameters "Direct link to Additional Parameters") For `params` you can pass any of the params according to the [Firecrawl documentation](https://docs.firecrawl.dev). * * * #### Help us out by providing feedback on this documentation page: [ Previous Figma ](/v0.1/docs/integrations/document_loaders/web_loaders/figma/)[ Next GitBook ](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * GitBook GitBook ======= This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install cheerio yarn add cheerio pnpm add cheerio Load from single GitBook page[​](#load-from-single-gitbook-page "Direct link to Load from single GitBook page") --------------------------------------------------------------------------------------------------------------- import { GitbookLoader } from "langchain/document_loaders/web/gitbook";const loader = new GitbookLoader( "https://docs.gitbook.com/product-tour/navigation");const docs = await loader.load(); Load from all paths in a given GitBook[​](#load-from-all-paths-in-a-given-gitbook "Direct link to Load from all paths in a given GitBook") ------------------------------------------------------------------------------------------------------------------------------------------ For this to work, the GitbookLoader needs to be initialized with the root path ([https://docs.gitbook.com](https://docs.gitbook.com) in this example) and have `shouldLoadAllPaths` set to `true`. import { GitbookLoader } from "langchain/document_loaders/web/gitbook";const loader = new GitbookLoader("https://docs.gitbook.com", { shouldLoadAllPaths: true,});const docs = await loader.load(); * * * #### Help us out by providing feedback on this documentation page: [ Previous Firecrawl ](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/)[ Next GitHub ](/v0.1/docs/integrations/document_loaders/web_loaders/github/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/hn/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Hacker News Hacker News =========== This example goes over how to load data from the hacker news website, using Cheerio. One document will be created for each page. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install cheerio yarn add cheerio pnpm add cheerio Usage[​](#usage "Direct link to Usage") --------------------------------------- import { HNLoader } from "langchain/document_loaders/web/hn";const loader = new HNLoader("https://news.ycombinator.com/item?id=34817881");const docs = await loader.load(); * * * #### Help us out by providing feedback on this documentation page: [ Previous GitHub ](/v0.1/docs/integrations/document_loaders/web_loaders/github/)[ Next IMSDB ](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/github/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * GitHub GitHub ====== This example goes over how to load data from a GitHub repository. You can set the `GITHUB_ACCESS_TOKEN` environment variable to a GitHub access token to increase the rate limit and access private repositories. Setup[​](#setup "Direct link to Setup") --------------------------------------- The GitHub loader requires the [ignore npm package](https://www.npmjs.com/package/ignore) as a peer dependency. Install it like this: * npm * Yarn * pnpm npm install ignore yarn add ignore pnpm add ignore Usage[​](#usage "Direct link to Usage") --------------------------------------- import { GithubRepoLoader } from "langchain/document_loaders/web/github";export const run = async () => { const loader = new GithubRepoLoader( "https://github.com/langchain-ai/langchainjs", { branch: "main", recursive: false, unknown: "warn", maxConcurrency: 5, // Defaults to 2 } ); const docs = await loader.load(); console.log({ docs });}; #### API Reference: * [GithubRepoLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github` The loader will ignore binary files like images. ### Using .gitignore Syntax[​](#using-gitignore-syntax "Direct link to Using .gitignore Syntax") To ignore specific files, you can pass in an `ignorePaths` array into the constructor: import { GithubRepoLoader } from "langchain/document_loaders/web/github";export const run = async () => { const loader = new GithubRepoLoader( "https://github.com/langchain-ai/langchainjs", { branch: "main", recursive: false, unknown: "warn", ignorePaths: ["*.md"] } ); const docs = await loader.load(); console.log({ docs }); // Will not include any .md files}; #### API Reference: * [GithubRepoLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github` ### Using a Different GitHub Instance[​](#using-a-different-github-instance "Direct link to Using a Different GitHub Instance") You may want to target a different GitHub instance than `github.com`, e.g. if you have a GitHub Enterprise instance for your company. For this you need two additional parameters: * `baseUrl` - the base URL of your GitHub instance, so the githubUrl matches `<baseUrl>/<owner>/<repo>/...` * `apiUrl` - the URL of the API endpoint of your GitHub instance import { GithubRepoLoader } from "langchain/document_loaders/web/github";export const run = async () => { const loader = new GithubRepoLoader( "https://github.your.company/org/repo-name", { baseUrl: "https://github.your.company", apiUrl: "https://github.your.company/api/v3", accessToken: "ghp_A1B2C3D4E5F6a7b8c9d0", branch: "main", recursive: true, unknown: "warn", } ); const docs = await loader.load(); console.log({ docs });}; #### API Reference: * [GithubRepoLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github` ### Dealing with Submodules[​](#dealing-with-submodules "Direct link to Dealing with Submodules") In case your repository has submodules, you have to decide if the loader should follow them or not. You can control this with the boolean `processSubmodules` parameter. By default, submodules are not processed. Note that processing submodules works only in conjunction with setting the `recursive` parameter to true. import { GithubRepoLoader } from "langchain/document_loaders/web/github";export const run = async () => { const loader = new GithubRepoLoader( "https://github.com/langchain-ai/langchainjs", { branch: "main", recursive: true, processSubmodules: true, unknown: "warn", } ); const docs = await loader.load(); console.log({ docs });}; #### API Reference: * [GithubRepoLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github` Note, that the loader will not follow submodules which are located on another GitHub instance than the one of the current repository. ### Stream large repository[​](#stream-large-repository "Direct link to Stream large repository") For situations where processing large repositories in a memory-efficient manner is required. You can use the `loadAsStream` method to asynchronously streams documents from the entire GitHub repository. import { GithubRepoLoader } from "langchain/document_loaders/web/github";export const run = async () => { const loader = new GithubRepoLoader( "https://github.com/langchain-ai/langchainjs", { branch: "main", recursive: false, unknown: "warn", maxConcurrency: 3, // Defaults to 2 } ); const docs = []; for await (const doc of loader.loadAsStream()) { docs.push(doc); } console.log({ docs });}; #### API Reference: * [GithubRepoLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github` * * * #### Help us out by providing feedback on this documentation page: [ Previous GitBook ](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/)[ Next Hacker News ](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Notion API Notion API ========== This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ Notion is a versatile productivity platform that consolidates note-taking, task management, and data organization tools into one interface. This document loader is able to take full Notion pages and databases and turn them into a LangChain Documents ready to be integrated into your projects. Setup[​](#setup "Direct link to Setup") --------------------------------------- 1. You will first need to install the official Notion client and the [notion-to-md](https://www.npmjs.com/package/notion-to-md) package as peer dependencies: * npm * Yarn * pnpm npm install @notionhq/client notion-to-md yarn add @notionhq/client notion-to-md pnpm add @notionhq/client notion-to-md 2. Create a [Notion integration](https://www.notion.so/my-integrations) and securely record the Internal Integration Secret (also known as `NOTION_INTEGRATION_TOKEN`). 3. Add a connection to your new integration on your page or database. To do this open your Notion page, go to the settings pips in the top right and scroll down to `Add connections` and select your new integration. 4. Get the `PAGE_ID` or `DATABASE_ID` for the page or database you want to load. > The 32 char hex in the url path represents the `ID`. For example: > PAGE\_ID: [https://www.notion.so/skarard/LangChain-Notion-API-`b34ca03f219c4420a6046fc4bdfdf7b4`](https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4) > DATABASE\_ID: [https://www.notion.so/skarard/`c393f19c3903440da0d34bf9c6c12ff2`?v=9c70a0f4e174498aa0f9021e0a9d52de](https://www.notion.so/skarard/c393f19c3903440da0d34bf9c6c12ff2?v=9c70a0f4e174498aa0f9021e0a9d52de) > REGEX: `/(?<!=)[0-9a-f]{32}/` Example Usage[​](#example-usage "Direct link to Example Usage") --------------------------------------------------------------- import { NotionAPILoader } from "langchain/document_loaders/web/notionapi";// Loading a page (including child pages all as separate documents)const pageLoader = new NotionAPILoader({ clientOptions: { auth: "<NOTION_INTEGRATION_TOKEN>", }, id: "<PAGE_ID>", type: "page",});// A page contents is likely to be more than 1000 characters so it's split into multiple documents (important for vectorization)const pageDocs = await pageLoader.loadAndSplit();console.log({ pageDocs });// Loading a database (each row is a separate document with all properties as metadata)const dbLoader = new NotionAPILoader({ clientOptions: { auth: "<NOTION_INTEGRATION_TOKEN>", }, id: "<DATABASE_ID>", type: "database", onDocumentLoaded: (current, total, currentTitle) => { console.log(`Loaded Page: ${currentTitle} (${current}/${total})`); }, callerOptions: { maxConcurrency: 64, // Default value }, propertiesAsHeader: true, // Prepends a front matter header of the page properties to the page contents});// A database row contents is likely to be less than 1000 characters so it's not split into multiple documentsconst dbDocs = await dbLoader.load();console.log({ dbDocs }); #### API Reference: * [NotionAPILoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_notionapi.NotionAPILoader.html) from `langchain/document_loaders/web/notionapi` * * * #### Help us out by providing feedback on this documentation page: [ Previous IMSDB ](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/)[ Next PDF files ](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Recursive URL Loader Recursive URL Loader ==================== When loading content from a website, we may want to process load all URLs on a page. For example, let's look at the [LangChain.js introduction](https://js.langchain.com/docs/get_started/introduction) docs. This has many interesting child pages that we may want to load, split, and later retrieve in bulk. The challenge is traversing the tree of child pages and assembling a list! We do this using the RecursiveUrlLoader. This also gives us the flexibility to exclude some children, customize the extractor, and more. Setup[​](#setup "Direct link to Setup") --------------------------------------- To get started, you'll need to install the [`jsdom`](https://www.npmjs.com/package/jsdom) package: * npm * Yarn * pnpm npm i jsdom yarn add jsdom pnpm add jsdom We also suggest adding a package like [`html-to-text`](https://www.npmjs.com/package/html-to-text) or [`@mozilla/readability`](https://www.npmjs.com/package/@mozilla/readability) for extracting the raw text from the page. * npm * Yarn * pnpm npm i html-to-text yarn add html-to-text pnpm add html-to-text Usage[​](#usage "Direct link to Usage") --------------------------------------- import { compile } from "html-to-text";import { RecursiveUrlLoader } from "langchain/document_loaders/web/recursive_url";const url = "https://js.langchain.com/docs/get_started/introduction";const compiledConvert = compile({ wordwrap: 130 }); // returns (text: string) => string;const loader = new RecursiveUrlLoader(url, { extractor: compiledConvert, maxDepth: 1, excludeDirs: ["https://js.langchain.com/docs/api/"],});const docs = await loader.load(); Options[​](#options "Direct link to Options") --------------------------------------------- interface Options { excludeDirs?: string[]; // webpage directories to exclude. extractor?: (text: string) => string; // a function to extract the text of the document from the webpage, by default it returns the page as it is. It is recommended to use tools like html-to-text to extract the text. By default, it just returns the page as it is. maxDepth?: number; // the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough would simply do the job. timeout?: number; // the timeout for each request, in the unit of seconds. By default, it is set to 10000 (10 seconds). preventOutside?: boolean; // whether to prevent crawling outside the root url. By default, it is set to true. callerOptions?: AsyncCallerConstructorParams; // the options to call the AsyncCaller for example setting max concurrency (default is 64)} However, since it's hard to perform a perfect filter, you may still see some irrelevant results in the results. You can perform a filter on the returned documents by yourself, if it's needed. Most of the time, the returned results are good enough. * * * #### Help us out by providing feedback on this documentation page: [ Previous PDF files ](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/)[ Next S3 File ](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * SearchApi Loader SearchApi Loader ================ This guide shows how to use SearchApi with LangChain to load web search results. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ [SearchApi](https://www.searchapi.io/) is a real-time API that grants developers access to results from a variety of search engines, including engines like [Google Search](https://www.searchapi.io/docs/google), [Google News](https://www.searchapi.io/docs/google-news), [Google Scholar](https://www.searchapi.io/docs/google-scholar), [YouTube Transcripts](https://www.searchapi.io/docs/youtube-transcripts) or any other engine that could be found in documentation. This API enables developers and businesses to scrape and extract meaningful data directly from the result pages of all these search engines, providing valuable insights for different use-cases. This guide shows how to load web search results using the `SearchApiLoader` in LangChain. The `SearchApiLoader` simplifies the process of loading and processing web search results from SearchApi. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to sign up and retrieve your [SearchApi API key](https://www.searchapi.io/). Usage[​](#usage "Direct link to Usage") --------------------------------------- Here's an example of how to use the `SearchApiLoader`: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { TokenTextSplitter } from "langchain/text_splitter";import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";// Initialize the necessary componentsconst llm = new ChatOpenAI({ model: "gpt-3.5-turbo-1106",});const embeddings = new OpenAIEmbeddings();const apiKey = "Your SearchApi API key";// Define your question and queryconst question = "Your question here";const query = "Your query here";// Use SearchApiLoader to load web search resultsconst loader = new SearchApiLoader({ q: query, apiKey, engine: "google" });const docs = await loader.load();const textSplitter = new TokenTextSplitter({ chunkSize: 800, chunkOverlap: 100,});const splitDocs = await textSplitter.splitDocuments(docs);// Use MemoryVectorStore to store the loaded documents in memoryconst vectorStore = await MemoryVectorStore.fromDocuments( splitDocs, embeddings);const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});const res = await chain.invoke({ input: question,});console.log(res.answer); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [TokenTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) from `langchain/text_splitter` * [SearchApiLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_searchapi.SearchApiLoader.html) from `langchain/document_loaders/web/searchapi` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` In this example, the `SearchApiLoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SearchApiLoader` can streamline the process of loading and processing web search results. * * * #### Help us out by providing feedback on this documentation page: [ Previous S3 File ](/v0.1/docs/integrations/document_loaders/web_loaders/s3/)[ Next SerpAPI Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * SerpAPI Loader SerpAPI Loader ============== This guide shows how to use SerpAPI with LangChain to load web search results. Overview[​](#overview "Direct link to Overview") ------------------------------------------------ [SerpAPI](https://serpapi.com/) is a real-time API that provides access to search results from various search engines. It is commonly used for tasks like competitor analysis and rank tracking. It empowers businesses to scrape, extract, and make sense of data from all search engines' result pages. This guide shows how to load web search results using the `SerpAPILoader` in LangChain. The `SerpAPILoader` simplifies the process of loading and processing web search results from SerpAPI. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to sign up and retrieve your [SerpAPI API key](https://serpapi.com/dashboard). Usage[​](#usage "Direct link to Usage") --------------------------------------- Here's an example of how to use the `SerpAPILoader`: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { SerpAPILoader } from "langchain/document_loaders/web/serpapi";import { ChatPromptTemplate } from "@langchain/core/prompts";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";import { createRetrievalChain } from "langchain/chains/retrieval";// Initialize the necessary componentsconst llm = new ChatOpenAI();const embeddings = new OpenAIEmbeddings();const apiKey = "Your SerpAPI API key";// Define your question and queryconst question = "Your question here";const query = "Your query here";// Use SerpAPILoader to load web search resultsconst loader = new SerpAPILoader({ q: query, apiKey });const docs = await loader.load();// Use MemoryVectorStore to store the loaded documents in memoryconst vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([ [ "system", "Answer the user's questions based on the below context:\n\n{context}", ], ["human", "{input}"],]);const combineDocsChain = await createStuffDocumentsChain({ llm, prompt: questionAnsweringPrompt,});const chain = await createRetrievalChain({ retriever: vectorStore.asRetriever(), combineDocsChain,});const res = await chain.invoke({ input: question,});console.log(res.answer); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [SerpAPILoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_serpapi.SerpAPILoader.html) from `langchain/document_loaders/web/serpapi` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` * [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents` * [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval` In this example, the `SerpAPILoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SerpAPILoader` can streamline the process of loading and processing web search results. * * * #### Help us out by providing feedback on this documentation page: [ Previous SearchApi Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/)[ Next Sitemap Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Sitemap Loader On this page Sitemap Loader ============== This notebook goes over how to use the [`SitemapLoader`](https://api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) class to load sitemaps into `Document`s. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, we need to install the `langchain` package: * npm * Yarn * pnpm npm install --save langchain yarn add langchain pnpm add langchain The URL passed in must either contain the `.xml` path to the sitemap, or a default `/sitemap.xml` will be appended to the URL. import { SitemapLoader } from "langchain/document_loaders/web/sitemap";const loader = new SitemapLoader("https://www.langchain.com/");const docs = await loader.load();console.log(docs.length);/**26 */console.log(docs[0]);/**Document { pageContent: '\n' + ' \n' + '\n' + ' \n' + ' \n' + ' Blog ArticleApr 8, 2022As the internet continues to develop and grow exponentially, jobs related to the industry do too, particularly those that relate to web design and development. The prediction is that by 2029, the job outlook for these two fields will grow by 8%—significantly faster than average. Whether you’re seeking salaried employment or aiming to work in a freelance capacity, a career in web design can offer a variety of employment arrangements, competitive salaries, and opportunities to utilize both technical and creative skill sets.What does a career in web design involve?A career in website design can involve the design, creation, and coding of a range of website types. Other tasks will typically include liaising with clients and discussing website specifications, incorporating feedback, working on graphic design and image editing, and enabling multimedia features such as audio and video. Requiring a range of creative and technical skills, web designers may be involved in work across a range of industries, including software companies, IT consultancies, web design companies, corporate organizations, and more. In contrast with web developers, web designers tend to play a more creative role, crafting the overall vision and design of a site, and determining how to best incorporate the necessary functionality. However, there can be significant overlap between the roles.Full-stack, back-end, and front-end web developmentThe U.S. Bureau of Labor Statistics (BLS) Occupational Outlook Handbook tends to group web developers and digital designers into one category. However, they define them separately, stating that web developers create and maintain websites and are responsible for the technical aspects including performance and capacity. Web or digital designers, on the other hand, are responsible for the look and functionality of websites and interfaces. They develop, create, and test the layout, functions, and navigation for usability. Web developers can focus on the back-end, front-end, or full-stack development, and typically utilize a range of programming languages, libraries, and frameworks to do so. Web designers may work more closely with front-end engineers to establish the user-end functionality and appearance of a site.Are web designers in demand in 2022?In our ever-increasingly digital environment, there is a constant need for websites—and therefore for web designers and developers. With 17.4 billion websites in existence as of January 2020, the demand for web developers is only expected to rise.Web designers with significant coding experience are typically in higher demand, and can usually expect a higher salary. Like all jobs, there are likely to be a range of opportunities, some of which are better paid than others. But certain skill sets are basic to web design, most of which are key to how to become a web designer in 2022.const removeHiddenBreakpointLayers = function ie(e){function t(){for(let{hash:r,mediaQuery:i}of e){if(!i)continue;if(window.matchMedia(i).matches)return r}return e[0]?.hash}let o=t();if(o)for(let r of document.querySelectorAll(".hidden-"+o))r.parentNode?.removeChild(r);for(let r of document.querySelectorAll(".ssr-variant")){for(;r.firstChild;)r.parentNode?.insertBefore(r.firstChild,r);r.parentNode?.removeChild(r)}for(let r of document.querySelectorAll("[data-framer-original-sizes]")){let i=r.getAttribute("data-framer-original-sizes");i===""?r.removeAttribute("sizes"):r.setAttribute("sizes",i),r.removeAttribute("data-framer-original-sizes")}};removeHiddenBreakpointLayers([{"hash":"1ksv3g6"}])\n' + '\n' + ' \n' + ' \n' + ' \n' + ' \n' + ' \n' + '\n' + '\n', metadata: { changefreq: '', lastmod: '', priority: '', source: 'https://www.langchain.com/blog-detail/starting-a-career-in-design' }} */ #### API Reference: * [SitemapLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) from `langchain/document_loaders/web/sitemap` Or, if you want to only load the sitemap and not the contents of each page from the sitemap, you can use the `parseSitemap` method: import { SitemapLoader } from "langchain/document_loaders/web/sitemap";const loader = new SitemapLoader("https://www.langchain.com/");const sitemap = await loader.parseSitemap();console.log(sitemap);/**[ { loc: 'https://www.langchain.com/blog-detail/starting-a-career-in-design', changefreq: '', lastmod: '', priority: '' }, { loc: 'https://www.langchain.com/blog-detail/building-a-navigation-component', changefreq: '', lastmod: '', priority: '' }, { loc: 'https://www.langchain.com/blog-detail/guide-to-creating-a-website', changefreq: '', lastmod: '', priority: '' }, { loc: 'https://www.langchain.com/page-1/terms-and-conditions', changefreq: '', lastmod: '', priority: '' },...42 more items] */ #### API Reference: * [SitemapLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) from `langchain/document_loaders/web/sitemap` * * * #### Help us out by providing feedback on this documentation page: [ Previous SerpAPI Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/)[ Next Sonix Audio ](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Setup](#setup) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Sonix Audio Sonix Audio =========== Compatibility Only available on Node.js. This covers how to load document objects from an audio file using the [Sonix](https://sonix.ai/) API. Setup[​](#setup "Direct link to Setup") --------------------------------------- To run this loader you will need to create an account on the [https://sonix.ai/](https://sonix.ai/) and obtain an auth key from the [https://my.sonix.ai/api](https://my.sonix.ai/api) page. You'll also need to install the `sonix-speech-recognition` library: * npm * Yarn * pnpm npm install sonix-speech-recognition yarn add sonix-speech-recognition pnpm add sonix-speech-recognition Usage[​](#usage "Direct link to Usage") --------------------------------------- Once auth key is configured, you can use the loader to create transcriptions and then convert them into a Document. In the `request` parameter, you can either specify a local file by setting `audioFilePath` or a remote file using `audioUrl`. You will also need to specify the audio language. See the list of supported languages [here](https://sonix.ai/docs/api#languages). import { SonixAudioTranscriptionLoader } from "langchain/document_loaders/web/sonix_audio";const loader = new SonixAudioTranscriptionLoader({ sonixAuthKey: "SONIX_AUTH_KEY", request: { audioFilePath: "LOCAL_AUDIO_FILE_PATH", fileName: "FILE_NAME", language: "en", },});const docs = await loader.load();console.log(docs); #### API Reference: * [SonixAudioTranscriptionLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_sonix_audio.SonixAudioTranscriptionLoader.html) from `langchain/document_loaders/web/sonix_audio` * * * #### Help us out by providing feedback on this documentation page: [ Previous Sitemap Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/)[ Next Blockchain Data ](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/pdf/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * PDF files On this page PDF files ========= You can use this version of the popular PDFLoader in web environments. By default, one document will be created for each page in the PDF file, you can change this behavior by setting the `splitPages` option to `false`. Setup[​](#setup "Direct link to Setup") --------------------------------------- * npm * Yarn * pnpm npm install pdf-parse yarn add pdf-parse pnpm add pdf-parse Usage[​](#usage "Direct link to Usage") --------------------------------------- import { WebPDFLoader } from "langchain/document_loaders/web/pdf";const blob = new Blob(); // e.g. from a file inputconst loader = new WebPDFLoader(blob);const docs = await loader.load();console.log({ docs }); #### API Reference: * [WebPDFLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_pdf.WebPDFLoader.html) from `langchain/document_loaders/web/pdf` Usage, custom `pdfjs` build[​](#usage-custom-pdfjs-build "Direct link to usage-custom-pdfjs-build") --------------------------------------------------------------------------------------------------- By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object. In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build. * npm * Yarn * pnpm npm install pdfjs-dist yarn add pdfjs-dist pnpm add pdfjs-dist import { WebPDFLoader } from "langchain/document_loaders/web/pdf";const blob = new Blob(); // e.g. from a file inputconst loader = new WebPDFLoader(blob, { // you may need to add `.then(m => m.default)` to the end of the import pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),}); Eliminating extra spaces[​](#eliminating-extra-spaces "Direct link to Eliminating extra spaces") ------------------------------------------------------------------------------------------------ PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this: import { WebPDFLoader } from "langchain/document_loaders/web/pdf";const blob = new Blob(); // e.g. from a file inputconst loader = new WebPDFLoader(blob, { parsedItemSeparator: "",}); * * * #### Help us out by providing feedback on this documentation page: [ Previous Notion API ](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/)[ Next Recursive URL Loader ](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [Setup](#setup) * [Usage](#usage) * [Usage, custom `pdfjs` build](#usage-custom-pdfjs-build) * [Eliminating extra spaces](#eliminating-extra-spaces) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [File Loaders](/v0.1/docs/integrations/document_loaders/file_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * [Cheerio](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) * [Puppeteer](/v0.1/docs/integrations/document_loaders/web_loaders/web_puppeteer/) * [Playwright](/v0.1/docs/integrations/document_loaders/web_loaders/web_playwright/) * [Apify Dataset](/v0.1/docs/integrations/document_loaders/web_loaders/apify_dataset/) * [AssemblyAI Audio Transcript](/v0.1/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription/) * [Azure Blob Storage Container](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container/) * [Azure Blob Storage File](/v0.1/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file/) * [Browserbase Loader](/v0.1/docs/integrations/document_loaders/web_loaders/browserbase/) * [College Confidential](/v0.1/docs/integrations/document_loaders/web_loaders/college_confidential/) * [Confluence](/v0.1/docs/integrations/document_loaders/web_loaders/confluence/) * [Couchbase](/v0.1/docs/integrations/document_loaders/web_loaders/couchbase/) * [Figma](/v0.1/docs/integrations/document_loaders/web_loaders/figma/) * [Firecrawl](/v0.1/docs/integrations/document_loaders/web_loaders/firecrawl/) * [GitBook](/v0.1/docs/integrations/document_loaders/web_loaders/gitbook/) * [GitHub](/v0.1/docs/integrations/document_loaders/web_loaders/github/) * [Hacker News](/v0.1/docs/integrations/document_loaders/web_loaders/hn/) * [IMSDB](/v0.1/docs/integrations/document_loaders/web_loaders/imsdb/) * [Notion API](/v0.1/docs/integrations/document_loaders/web_loaders/notionapi/) * [PDF files](/v0.1/docs/integrations/document_loaders/web_loaders/pdf/) * [Recursive URL Loader](/v0.1/docs/integrations/document_loaders/web_loaders/recursive_url_loader/) * [S3 File](/v0.1/docs/integrations/document_loaders/web_loaders/s3/) * [SearchApi Loader](/v0.1/docs/integrations/document_loaders/web_loaders/searchapi/) * [SerpAPI Loader](/v0.1/docs/integrations/document_loaders/web_loaders/serpapi/) * [Sitemap Loader](/v0.1/docs/integrations/document_loaders/web_loaders/sitemap/) * [Sonix Audio](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/) * [Blockchain Data](/v0.1/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain/) * [YouTube transcripts](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Web Loaders](/v0.1/docs/integrations/document_loaders/web_loaders/) * Blockchain Data Blockchain Data =============== This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API. You will need a free Sort API key, visiting sort.xyz to obtain one. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { SortXYZBlockchainLoader } from "langchain/document_loaders/web/sort_xyz_blockchain";import { OpenAI } from "@langchain/openai";/** * See https://docs.sort.xyz/docs/api-keys to get your free Sort API key. * See https://docs.sort.xyz for more information on the available queries. * See https://docs.sort.xyz/reference for more information about Sort's REST API. *//** * Run the example. */export const run = async () => { // Initialize the OpenAI model. Use OPENAI_API_KEY from .env in /examples const model = new OpenAI({ temperature: 0.9 }); const apiKey = "YOUR_SORTXYZ_API_KEY"; const contractAddress = "0x887F3909C14DAbd9e9510128cA6cBb448E932d7f".toLowerCase(); /* Load NFT metadata from the Ethereum blockchain. Hint: to load by a specific ID, see SQL query example below. */ const nftMetadataLoader = new SortXYZBlockchainLoader({ apiKey, query: { type: "NFTMetadata", blockchain: "ethereum", contractAddress, }, }); const nftMetadataDocs = await nftMetadataLoader.load(); const nftPrompt = "Describe the character with the attributes from the following json document in a 4 sentence story. "; const nftResponse = await model.invoke( nftPrompt + JSON.stringify(nftMetadataDocs[0], null, 2) ); console.log(`user > ${nftPrompt}`); console.log(`chatgpt > ${nftResponse}`); /* Load the latest transactions for a contract address from the Ethereum blockchain. */ const latestTransactionsLoader = new SortXYZBlockchainLoader({ apiKey, query: { type: "latestTransactions", blockchain: "ethereum", contractAddress, }, }); const latestTransactionsDocs = await latestTransactionsLoader.load(); const latestPrompt = "Describe the following json documents in only 4 sentences per document. Include as much detail as possible. "; const latestResponse = await model.invoke( latestPrompt + JSON.stringify(latestTransactionsDocs[0], null, 2) ); console.log(`\n\nuser > ${nftPrompt}`); console.log(`chatgpt > ${latestResponse}`); /* Load metadata for a specific NFT by using raw SQL and the NFT index. See https://docs.sort.xyz for forumulating SQL. */ const sqlQueryLoader = new SortXYZBlockchainLoader({ apiKey, query: `SELECT * FROM ethereum.nft_metadata WHERE contract_address = '${contractAddress}' AND token_id = 1 LIMIT 1`, }); const sqlDocs = await sqlQueryLoader.load(); const sqlPrompt = "Describe the character with the attributes from the following json document in an ad for a new coffee shop. "; const sqlResponse = await model.invoke( sqlPrompt + JSON.stringify(sqlDocs[0], null, 2) ); console.log(`\n\nuser > ${sqlPrompt}`); console.log(`chatgpt > ${sqlResponse}`);}; #### API Reference: * [SortXYZBlockchainLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_sort_xyz_blockchain.SortXYZBlockchainLoader.html) from `langchain/document_loaders/web/sort_xyz_blockchain` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * * * #### Help us out by providing feedback on this documentation page: [ Previous Sonix Audio ](/v0.1/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription/)[ Next YouTube transcripts ](/v0.1/docs/integrations/document_loaders/web_loaders/youtube/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/memory/types/vectorstore_retriever_memory/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Memory](/v0.1/docs/modules/memory/) * [\[Beta\] Memory](/v0.1/docs/modules/memory/) * [Chat Message History](/v0.1/docs/modules/memory/chat_messages/) * [Memory types](/v0.1/docs/modules/memory/types/) * [Conversation buffer memory](/v0.1/docs/modules/memory/types/buffer/) * [Using Buffer Memory with Chat Models](/v0.1/docs/modules/memory/types/buffer_memory_chat/) * [Conversation buffer window memory](/v0.1/docs/modules/memory/types/buffer_window/) * [Entity memory](/v0.1/docs/modules/memory/types/entity_summary_memory/) * [Combined memory](/v0.1/docs/modules/memory/types/multiple_memory/) * [Conversation summary memory](/v0.1/docs/modules/memory/types/summary/) * [Conversation summary buffer memory](/v0.1/docs/modules/memory/types/summary_buffer/) * [Vector store-backed memory](/v0.1/docs/modules/memory/types/vectorstore_retriever_memory/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [Experimental](/v0.1/docs/modules/experimental/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * More * [Memory](/v0.1/docs/modules/memory/) * [Memory types](/v0.1/docs/modules/memory/types/) * Vector store-backed memory Vector store-backed memory ========================== `VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called. This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions. In this case, the "docs" are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";import { VectorStoreRetrieverMemory } from "langchain/memory";import { LLMChain } from "langchain/chains";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { PromptTemplate } from "@langchain/core/prompts";const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());const memory = new VectorStoreRetrieverMemory({ // 1 is how many documents to return, you might want to return more, eg. 4 vectorStoreRetriever: vectorStore.asRetriever(1), memoryKey: "history",});// First let's save some information to memory, as it would happen when// used inside a chain.await memory.saveContext( { input: "My favorite food is pizza" }, { output: "thats good to know" });await memory.saveContext( { input: "My favorite sport is soccer" }, { output: "..." });await memory.saveContext({ input: "I don't the Celtics" }, { output: "ok" });// Now let's use the memory to retrieve the information we saved.console.log( await memory.loadMemoryVariables({ prompt: "what sport should i watch?" }));/*{ history: 'input: My favorite sport is soccer\noutput: ...' }*/// Now let's use it in a chain.const model = new OpenAI({ temperature: 0.9 });const prompt = PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Relevant pieces of previous conversation:{history}(You do not need to use these pieces of information if not relevant)Current conversation:Human: {input}AI:`);const chain = new LLMChain({ llm: model, prompt, memory });const res1 = await chain.invoke({ input: "Hi, my name is Perry, what's up?" });console.log({ res1 });/*{ res1: { text: " Hi Perry, I'm doing great! I'm currently exploring different topics related to artificial intelligence like natural language processing and machine learning. What about you? What have you been up to lately?" }}*/const res2 = await chain.invoke({ input: "what's my favorite sport?" });console.log({ res2 });/*{ res2: { text: ' You said your favorite sport is soccer.' } }*/const res3 = await chain.invoke({ input: "what's my name?" });console.log({ res3 });/*{ res3: { text: ' Your name is Perry.' } }*/// Sometimes we might want to save metadata along with the conversation snippetsconst memoryWithMetadata = new VectorStoreRetrieverMemory({ vectorStoreRetriever: vectorStore.asRetriever( 1, (doc) => doc.metadata?.userId === "1" ), memoryKey: "history", metadata: { userId: "1", groupId: "42" },});await memoryWithMetadata.saveContext( { input: "Community is my favorite TV Show" }, { output: "6 seasons and a movie!" });console.log( await memoryWithMetadata.loadMemoryVariables({ prompt: "what show should i watch? ", }));/*{ history: 'input: Community is my favorite TV Show\noutput: 6 seasons and a movie!' }*/// If we have a retriever whose filter does not match our metadata, our previous messages won't appearconst memoryWithoutMatchingMetadata = new VectorStoreRetrieverMemory({ vectorStoreRetriever: vectorStore.asRetriever( 1, (doc) => doc.metadata?.userId === "2" ), memoryKey: "history",});// There are no messages saved for userId 2console.log( await memoryWithoutMatchingMetadata.loadMemoryVariables({ prompt: "what show should i watch? ", }));/*{ history: '' }*/// If we need the metadata to be dynamic, we can pass a function insteadconst memoryWithMetadataFunction = new VectorStoreRetrieverMemory({ vectorStoreRetriever: vectorStore.asRetriever(1), memoryKey: "history", metadata: (inputValues, _outputValues) => ({ firstWord: inputValues?.input.split(" ")[0], // First word of the input createdAt: new Date().toLocaleDateString(), // Date when the message was saved userId: "1", // Hardcoded userId }),}); #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [VectorStoreRetrieverMemory](https://api.js.langchain.com/classes/langchain_memory.VectorStoreRetrieverMemory.html) from `langchain/memory` * [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains` * [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * * * #### Help us out by providing feedback on this documentation page: [ Previous Conversation summary buffer memory ](/v0.1/docs/modules/memory/types/summary_buffer/)[ Next Callbacks ](/v0.1/docs/modules/callbacks/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/callbacks/how_to/background_callbacks/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Memory](/v0.1/docs/modules/memory/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [How-to](/v0.1/docs/modules/callbacks/how_to/background_callbacks/) * [Backgrounding callbacks](/v0.1/docs/modules/callbacks/how_to/background_callbacks/) * [Creating custom callback handlers](/v0.1/docs/modules/callbacks/how_to/create_handlers/) * [Callbacks in custom Chains](/v0.1/docs/modules/callbacks/how_to/creating_subclasses/) * [Tags](/v0.1/docs/modules/callbacks/how_to/tags/) * [Listeners](/v0.1/docs/modules/callbacks/how_to/with_listeners/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [Experimental](/v0.1/docs/modules/experimental/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * More * [Callbacks](/v0.1/docs/modules/callbacks/) * How-to * Backgrounding callbacks Backgrounding callbacks ======================= By default callbacks run in-line with the your chain/LLM run. This means that if you have a slow callback you can see an impact on the overall latency of your runs. You can make callbacks not be awaited by setting the environment variable `LANGCHAIN_CALLBACKS_BACKGROUND=true`. This will cause the callbacks to be run in the background, and will not impact the overall latency of your runs. When you do this you might need to await all pending callbacks before exiting your application. You can do this with the following method: import { awaitAllCallbacks } from "@langchain/core/callbacks/promises";await awaitAllCallbacks(); #### API Reference: * [awaitAllCallbacks](https://api.js.langchain.com/functions/langchain_core_callbacks_promises.awaitAllCallbacks.html) from `@langchain/core/callbacks/promises` * * * #### Help us out by providing feedback on this documentation page: [ Previous Callbacks ](/v0.1/docs/modules/callbacks/)[ Next Creating custom callback handlers ](/v0.1/docs/modules/callbacks/how_to/create_handlers/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/experimental/mask/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Memory](/v0.1/docs/modules/memory/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [Experimental](/v0.1/docs/modules/experimental/) * [Masking](/v0.1/docs/modules/experimental/mask/) * [Prompts](/v0.1/docs/modules/experimental/prompts/custom_formats/) * [Experimental](/v0.1/docs/modules/experimental/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * More * [Experimental](/v0.1/docs/modules/experimental/) * Masking On this page Masking ======= The experimental masking parser and transformer is an extendable module for masking and rehydrating strings. One of the primary use cases for this module is to redact PII (Personal Identifiable Information) from a string before making a call to an llm. ### Real world scenario[​](#real-world-scenario "Direct link to Real world scenario") A customer support system receives messages containing sensitive customer information. The system must parse these messages, mask any PII (like names, email addresses, and phone numbers), and log them for analysis while complying with privacy regulations. Before logging the transcript a summary is generated using an llm. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- ### Basic Example[​](#basic-example "Direct link to Basic Example") Use the RegexMaskingTransformer to create a simple mask for email and phone. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { MaskingParser, RegexMaskingTransformer,} from "langchain/experimental/masking";// Define masking strategyconst emailMask = () => `[email-${Math.random().toString(16).slice(2)}]`;const phoneMask = () => `[phone-${Math.random().toString(16).slice(2)}]`;// Configure pii transformerconst piiMaskingTransformer = new RegexMaskingTransformer({ email: { regex: /\S+@\S+\.\S+/g, mask: emailMask }, phone: { regex: /\d{3}-\d{3}-\d{4}/g, mask: phoneMask },});const maskingParser = new MaskingParser({ transformers: [piiMaskingTransformer],});maskingParser.addTransformer(piiMaskingTransformer);const input = "Contact me at jane.doe@email.com or 555-123-4567. Also reach me at john.smith@email.com";const masked = await maskingParser.mask(input);console.log(masked);// Contact me at [email-a31e486e324f6] or [phone-da8fc1584f224]. Also reach me at [email-d5b6237633d95]const rehydrated = await maskingParser.rehydrate(masked);console.log(rehydrated);// Contact me at jane.doe@email.com or 555-123-4567. Also reach me at john.smith@email.com #### API Reference: * [MaskingParser](https://api.js.langchain.com/classes/langchain_experimental_masking.MaskingParser.html) from `langchain/experimental/masking` * [RegexMaskingTransformer](https://api.js.langchain.com/classes/langchain_experimental_masking.RegexMaskingTransformer.html) from `langchain/experimental/masking` note If you plan on storing the masking state to rehydrate the original values asynchronously ensure you are following best security practices. In most cases you will want to define a custom hashing and salting strategy. ### Next.js stream[​](#nextjs-stream "Direct link to Next.js stream") Example nextjs chat endpoint leveraging the RegexMaskingTransformer. The current chat message and chat message history are masked every time the api is called with a chat payload. // app/api/chatimport { MaskingParser, RegexMaskingTransformer,} from "langchain/experimental/masking";import { ChatOpenAI } from "@langchain/openai";import { PromptTemplate } from "@langchain/core/prompts";import { BytesOutputParser } from "@langchain/core/output_parsers";export const runtime = "edge";// Function to format chat messages for consistencyconst formatMessage = (message: any) => `${message.role}: ${message.content}`;const CUSTOMER_SUPPORT = `You are a customer support summarizer agent. Always include masked PII in your response. Current conversation: {chat_history} User: {input} AI:`;// Configure Masking Parserconst maskingParser = new MaskingParser();// Define transformations for masking emails and phone numbers using regular expressionsconst piiMaskingTransformer = new RegexMaskingTransformer({ email: { regex: /\S+@\S+\.\S+/g }, // If a regex is provided without a mask we fallback to a simple default hashing function phone: { regex: /\d{3}-\d{3}-\d{4}/g },});maskingParser.addTransformer(piiMaskingTransformer);export async function POST(req: Request) { try { const body = await req.json(); const messages = body.messages ?? []; const formattedPreviousMessages = messages.slice(0, -1).map(formatMessage); const currentMessageContent = messages[messages.length - 1].content; // Extract the content of the last message // Mask sensitive information in the current message const guardedMessageContent = await maskingParser.mask( currentMessageContent ); // Mask sensitive information in the chat history const guardedHistory = await maskingParser.mask( formattedPreviousMessages.join("\n") ); const prompt = PromptTemplate.fromTemplate(CUSTOMER_SUPPORT); const model = new ChatOpenAI({ temperature: 0.8 }); // Initialize an output parser that handles serialization and byte-encoding for streaming const outputParser = new BytesOutputParser(); const chain = prompt.pipe(model).pipe(outputParser); // Chain the prompt, model, and output parser together console.log("[GUARDED INPUT]", guardedMessageContent); // Contact me at -1157967895 or -1626926859. console.log("[GUARDED HISTORY]", guardedHistory); // user: Contact me at -1157967895 or -1626926859. assistant: Thank you for providing your contact information. console.log("[STATE]", maskingParser.getState()); // { '-1157967895' => 'jane.doe@email.com', '-1626926859' => '555-123-4567'} // Stream the AI response based on the masked chat history and current message const stream = await chain.stream({ chat_history: guardedHistory, input: guardedMessageContent, }); return new Response(stream, { headers: { "content-type": "text/plain; charset=utf-8" }, }); } catch (e: any) { return new Response(JSON.stringify({ error: e.message }), { status: 500, headers: { "content-type": "application/json", }, }); }} #### API Reference: * [MaskingParser](https://api.js.langchain.com/classes/langchain_experimental_masking.MaskingParser.html) from `langchain/experimental/masking` * [RegexMaskingTransformer](https://api.js.langchain.com/classes/langchain_experimental_masking.RegexMaskingTransformer.html) from `langchain/experimental/masking` * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [BytesOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) from `@langchain/core/output_parsers` ### Kitchen sink[​](#kitchen-sink "Direct link to Kitchen sink") import { MaskingParser, RegexMaskingTransformer,} from "langchain/experimental/masking";// A simple hash function for demonstration purposesfunction simpleHash(input: string): string { let hash = 0; for (let i = 0; i < input.length; i += 1) { const char = input.charCodeAt(i); hash = (hash << 5) - hash + char; hash |= 0; // Convert to 32bit integer } return hash.toString(16);}const emailMask = (match: string) => `[email-${simpleHash(match)}]`;const phoneMask = (match: string) => `[phone-${simpleHash(match)}]`;const nameMask = (match: string) => `[name-${simpleHash(match)}]`;const ssnMask = (match: string) => `[ssn-${simpleHash(match)}]`;const creditCardMask = (match: string) => `[creditcard-${simpleHash(match)}]`;const passportMask = (match: string) => `[passport-${simpleHash(match)}]`;const licenseMask = (match: string) => `[license-${simpleHash(match)}]`;const addressMask = (match: string) => `[address-${simpleHash(match)}]`;const dobMask = (match: string) => `[dob-${simpleHash(match)}]`;const bankAccountMask = (match: string) => `[bankaccount-${simpleHash(match)}]`;// Regular expressions for different types of PIIconst patterns = { email: { regex: /\S+@\S+\.\S+/g, mask: emailMask }, phone: { regex: /\b\d{3}-\d{3}-\d{4}\b/g, mask: phoneMask }, name: { regex: /\b[A-Z][a-z]+ [A-Z][a-z]+\b/g, mask: nameMask }, ssn: { regex: /\b\d{3}-\d{2}-\d{4}\b/g, mask: ssnMask }, creditCard: { regex: /\b(?:\d{4}[ -]?){3}\d{4}\b/g, mask: creditCardMask }, passport: { regex: /(?i)\b[A-Z]{1,2}\d{6,9}\b/g, mask: passportMask }, license: { regex: /(?i)\b[A-Z]{1,2}\d{6,8}\b/g, mask: licenseMask }, address: { regex: /\b\d{1,5}\s[A-Z][a-z]+(?:\s[A-Z][a-z]+)\*\b/g, mask: addressMask, }, dob: { regex: /\b\d{4}-\d{2}-\d{2}\b/g, mask: dobMask }, bankAccount: { regex: /\b\d{8,17}\b/g, mask: bankAccountMask },};// Create a RegexMaskingTransformer with multiple patternsconst piiMaskingTransformer = new RegexMaskingTransformer(patterns);// Hooks for different stages of masking and rehydratingconst onMaskingStart = (message: string) => console.log(`Starting to mask message: ${message}`);const onMaskingEnd = (maskedMessage: string) => console.log(`Masked message: ${maskedMessage}`);const onRehydratingStart = (message: string) => console.log(`Starting to rehydrate message: ${message}`);const onRehydratingEnd = (rehydratedMessage: string) => console.log(`Rehydrated message: ${rehydratedMessage}`);// Initialize MaskingParser with the transformer and hooksconst maskingParser = new MaskingParser({ transformers: [piiMaskingTransformer], onMaskingStart, onMaskingEnd, onRehydratingStart, onRehydratingEnd,});// Example message containing multiple types of PIIconst message = "Contact Jane Doe at jane.doe@email.com or 555-123-4567. Her SSN is 123-45-6789 and her credit card number is 1234-5678-9012-3456. Passport number: AB1234567, Driver's License: X1234567, Address: 123 Main St, Date of Birth: 1990-01-01, Bank Account: 12345678901234567.";// Mask and rehydrate the messagemaskingParser .mask(message) .then((maskedMessage: string) => { console.log(`Masked message: ${maskedMessage}`); return maskingParser.rehydrate(maskedMessage); }) .then((rehydratedMessage: string) => { console.log(`Final rehydrated message: ${rehydratedMessage}`); }); #### API Reference: * [MaskingParser](https://api.js.langchain.com/classes/langchain_experimental_masking.MaskingParser.html) from `langchain/experimental/masking` * [RegexMaskingTransformer](https://api.js.langchain.com/classes/langchain_experimental_masking.RegexMaskingTransformer.html) from `langchain/experimental/masking` * * * #### Help us out by providing feedback on this documentation page: [ Previous Experimental ](/v0.1/docs/modules/experimental/)[ Next Custom template formats ](/v0.1/docs/modules/experimental/prompts/custom_formats/) * [Real world scenario](#real-world-scenario) * [Get started](#get-started) * [Basic Example](#basic-example) * [Next.js stream](#nextjs-stream) * [Kitchen sink](#kitchen-sink) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/experimental/prompts/custom_formats/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Memory](/v0.1/docs/modules/memory/) * [Callbacks](/v0.1/docs/modules/callbacks/) * [Experimental](/v0.1/docs/modules/experimental/) * [Masking](/v0.1/docs/modules/experimental/mask/) * [Prompts](/v0.1/docs/modules/experimental/prompts/custom_formats/) * [Custom template formats](/v0.1/docs/modules/experimental/prompts/custom_formats/) * [Experimental](/v0.1/docs/modules/experimental/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * More * [Experimental](/v0.1/docs/modules/experimental/) * Prompts * Custom template formats On this page Alternate prompt template formats ================================= The primary template format for LangChain prompts is the simple and versatile `f-string`. LangChain.js supports [`handlebars`](https://handlebarsjs.com/) as an experimental alternative. Note that templates created this way cannot be added to the LangChain prompt hub and may have unexpected behavior if you're using tracing. Setup[​](#setup "Direct link to Setup") --------------------------------------- You'll need to install the [handlebars](https://www.npmjs.com/package/handlebars) templating engine package: npm install handlebars Usage[​](#usage "Direct link to Usage") --------------------------------------- import { HandlebarsPromptTemplate } from "langchain/experimental/prompts/handlebars";import { ChatAnthropic } from "@langchain/anthropic";import { StringOutputParser } from "@langchain/core/output_parsers";const template = `Tell me a joke about {{topic}}`;const prompt = HandlebarsPromptTemplate.fromTemplate(template);const formattedResult = await prompt.invoke({ topic: "bears" });console.log(formattedResult);/* StringPromptValue { value: 'Tell me a joke about bears' }*/const model = new ChatAnthropic();const chain = prompt.pipe(model).pipe(new StringOutputParser());const result = await chain.invoke({ topic: "bears",});console.log(result);/* Why did the bears dissolve their hockey team? Because there were too many grizzly fights!*/ #### API Reference: * [HandlebarsPromptTemplate](https://api.js.langchain.com/classes/langchain_experimental_prompts_handlebars.HandlebarsPromptTemplate.html) from `langchain/experimental/prompts/handlebars` * [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * * * #### Help us out by providing feedback on this documentation page: [ Previous Masking ](/v0.1/docs/modules/experimental/mask/)[ Next Experimental ](/v0.1/docs/modules/experimental/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/prompts/quick_start/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [Quick Start](/v0.1/docs/modules/model_io/prompts/quick_start/) * [Example selectors](/v0.1/docs/modules/model_io/prompts/example_selector_types/) * [Few Shot Prompt Templates](/v0.1/docs/modules/model_io/prompts/few_shot/) * [Partial prompt templates](/v0.1/docs/modules/model_io/prompts/partial/) * [Composition](/v0.1/docs/modules/model_io/prompts/pipeline/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * Quick Start On this page Quick Start =========== Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions to make constructing and working with prompts easy. What is a prompt template?[​](#what-is-a-prompt-template "Direct link to What is a prompt template?") ----------------------------------------------------------------------------------------------------- A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt. A prompt template can contain: * instructions to the language model, * a set of few shot examples to help the language model generate a better response, * a question to the language model. Here's a simple example: * F-String * Mustache import { PromptTemplate } from "@langchain/core/prompts";// If a template is passed in, the input variables are inferred automatically from the template.const prompt = PromptTemplate.fromTemplate( `You are a naming consultant for new companies.What is a good name for a company that makes {product}?`);const formattedPrompt = await prompt.format({ product: "colorful socks",});/*You are a naming consultant for new companies.What is a good name for a company that makes colorful socks?*/ #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` import { PromptTemplate } from "@langchain/core/prompts";// If a template is passed in, the input variables are inferred automatically from the template.const prompt = PromptTemplate.fromTemplate( `You are a naming consultant for new companies.What is a good name for a company that makes {{product}}?`, { templateFormat: "mustache", });const formattedPrompt = await prompt.format({ product: "colorful socks",});/*You are a naming consultant for new companies.What is a good name for a company that makes colorful socks?*/ #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` Create a prompt template[​](#create-a-prompt-template "Direct link to Create a prompt template") ------------------------------------------------------------------------------------------------ You can create simple hardcoded prompts using the `PromptTemplate` class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt. * F-String * Mustache import { PromptTemplate } from "@langchain/core/prompts";// An example prompt with no input variablesconst noInputPrompt = new PromptTemplate({ inputVariables: [], template: "Tell me a joke.",});const formattedNoInputPrompt = await noInputPrompt.format({});console.log(formattedNoInputPrompt);// "Tell me a joke."// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {adjective} joke.",});const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke."// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {adjective} joke about {content}.",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens." #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` import { PromptTemplate } from "@langchain/core/prompts";// An example prompt with no input variablesconst noInputPrompt = new PromptTemplate({ inputVariables: [], template: "Tell me a joke.",});const formattedNoInputPrompt = await noInputPrompt.format({});console.log(formattedNoInputPrompt);// "Tell me a joke."// An example prompt with one input variableconst oneInputPrompt = new PromptTemplate({ inputVariables: ["adjective"], template: "Tell me a {{adjective}} joke.", templateFormat: "mustache",});const formattedOneInputPrompt = await oneInputPrompt.format({ adjective: "funny",});console.log(formattedOneInputPrompt);// "Tell me a funny joke."// An example prompt with multiple input variablesconst multipleInputPrompt = new PromptTemplate({ inputVariables: ["adjective", "content"], template: "Tell me a {{adjective}} joke about {{content}}.", templateFormat: "mustache",});const formattedMultipleInputPrompt = await multipleInputPrompt.format({ adjective: "funny", content: "chickens",});console.log(formattedMultipleInputPrompt);// "Tell me a funny joke about chickens." #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` If you do not wish to specify `inputVariables` manually, you can also create a `PromptTemplate` using the `fromTemplate` class method. LangChain will automatically infer the `inputVariables` based on the `template` passed. * F-String * Mustache import { PromptTemplate } from "@langchain/core/prompts";const template = "Tell me a {adjective} joke about {content}.";const promptTemplate = PromptTemplate.fromTemplate(template);console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens." #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` import { PromptTemplate } from "@langchain/core/prompts";const template = "Tell me a {{adjective}} joke about {{content}}.";const promptTemplate = PromptTemplate.fromTemplate(template, { templateFormat: "mustache",});console.log(promptTemplate.inputVariables);// ['adjective', 'content']const formattedPromptTemplate = await promptTemplate.format({ adjective: "funny", content: "chickens",});console.log(formattedPromptTemplate);// "Tell me a funny joke about chickens." #### API Reference: * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` You can create custom prompt templates that format the prompt in any way you want. Chat prompt template[​](#chat-prompt-template "Direct link to Chat prompt template") ------------------------------------------------------------------------------------ [Chat Models](/v0.1/docs/modules/model_io/chat/) take a list of chat messages as input - this list is commonly referred to as a `prompt`. These chat messages differ from raw string (which you would pass into a [LLM](/v0.1/docs/modules/model_io/llms/)) in that every message is associated with a `role`. For example, in OpenAI [Chat Completion API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI, human or system role. The model is supposed to follow instruction from system chat message more closely. LangChain provides several prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of `PromptTemplate` when invoking chat models to fully explore the model's potential. import { ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,} from "@langchain/core/prompts";import { AIMessage, HumanMessage, SystemMessage,} from "@langchain/core/messages"; To create a message template associated with a role, you would use the corresponding `<ROLE>MessagePromptTemplate`. For convenience, you can also declare message prompt templates as tuples. These will be coerced to the proper prompt template types: const systemTemplate = "You are a helpful assistant that translates {input_language} to {output_language}.";const humanTemplate = "{text}";const chatPrompt = ChatPromptTemplate.fromMessages([ ["system", systemTemplate], ["human", humanTemplate],]);// Format the messagesconst formattedChatPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming.",});console.log(formattedChatPrompt);/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/ You can also use `ChatPromptTemplate`'s `.formatPromptValue()` method -- this returns a `PromptValue`, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model. If you prefer to use the message classes, there is a `fromTemplate` method exposed on these classes. This is what it would look like: const template = "You are a helpful assistant that translates {input_language} to {output_language}.";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate); If you wanted to construct the `MessagePromptTemplate` more directly, you could create a PromptTemplate externally and then pass it in, e.g.: const prompt = new PromptTemplate({ template: "You are a helpful assistant that translates {input_language} to {output_language}.", inputVariables: ["input_language", "output_language"],});const systemMessagePrompt2 = new SystemMessagePromptTemplate({ prompt,}); **Note:** If using TypeScript, you can add typing to prompts created with `.fromMessages` by passing a type parameter like this: const chatPrompt = ChatPromptTemplate.fromMessages<{ input_language: string; output_language: string; text: string;}>([systemMessagePrompt, humanMessagePrompt]); Multi-modal prompts[​](#multi-modal-prompts "Direct link to Multi-modal prompts") --------------------------------------------------------------------------------- import { HumanMessage } from "@langchain/core/messages";import { ChatPromptTemplate } from "@langchain/core/prompts";import fs from "node:fs/promises";const hotdogImage = await fs.readFile("hotdog.jpg");const base64Image = hotdogImage.toString("base64");const imageURL = "https://avatars.githubusercontent.com/u/126733545?s=200&v=4";const langchainLogoMessage = new HumanMessage({ content: [ { type: "image_url", image_url: { url: "{imageURL}", detail: "high", }, }, ],});const base64ImageMessage = new HumanMessage({ content: [ { type: "image_url", image_url: "data:image/jpeg;base64,{base64Image}", }, ],});const multiModalPrompt = ChatPromptTemplate.fromMessages([ ["system", "You have 20:20 vision! Describe the user's image."], langchainLogoMessage, base64ImageMessage,]);const formattedPrompt = await multiModalPrompt.invoke({ imageURL, base64Image,});console.log(JSON.stringify(formattedPrompt, null, 2));/**{ "kwargs": { "messages": [ { "kwargs": { "content": "You have 20:20 vision! Describe the user's image.", } }, { "kwargs": { "content": [ { "type": "image_url", "image_url": { "url": "https://avatars.githubusercontent.com/u/126733545?s=200&v=4", "detail": "high" } } ], } }, { "kwargs": { "content": [ { "type": "image_url", "image_url": "data:image/png;base64,/9j/4AAQSkZJRgABAQEBLAEsAAD/4Q..." } ], } } ] }} */ #### API Reference: * [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages` * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` tip LangSmith will render your images inside traces! See the LangSmith trace [here](https://smith.langchain.com/public/15f4b4e4-2b2f-476a-952c-b9abcb9ac278/r) You can also pass multi-modal prompt templates inline: import { ChatPromptTemplate } from "@langchain/core/prompts";import fs from "node:fs/promises";const hotdogImage = await fs.readFile("hotdog.jpg");// Convert the image to base64const base64Image = hotdogImage.toString("base64");const imageURL = "https://avatars.githubusercontent.com/u/126733545?s=200&v=4";const multiModalPrompt = ChatPromptTemplate.fromMessages([ ["system", "You have 20:20 vision! Describe the user's image."], [ "human", [ { type: "image_url", image_url: { url: "{imageURL}", detail: "high", }, }, { type: "image_url", image_url: "data:image/jpeg;base64,{base64Image}", }, ], ],]);const formattedPrompt = await multiModalPrompt.invoke({ imageURL, base64Image,});console.log(JSON.stringify(formattedPrompt, null, 2));/**{ "kwargs": { "messages": [ { "kwargs": { "content": "You have 20:20 vision! Describe the user's image.", } }, { "kwargs": { "content": [ { "type": "image_url", "image_url": { "url": "https://avatars.githubusercontent.com/u/126733545?s=200&v=4", "detail": "high" } }, { "type": "image_url", "image_url": { "url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEBLAEsAAD/4QBWRX...", } } ], } } ] }} */ #### API Reference: * [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts` tip See the LangSmith trace [here](https://smith.langchain.com/public/66bf8258-fa1c-42a3-9e14-9b3eb5902435/r) * * * #### Help us out by providing feedback on this documentation page: [ Previous Prompts ](/v0.1/docs/modules/model_io/prompts/)[ Next Example selectors ](/v0.1/docs/modules/model_io/prompts/example_selector_types/) * [What is a prompt template?](#what-is-a-prompt-template) * [Create a prompt template](#create-a-prompt-template) * [Chat prompt template](#chat-prompt-template) * [Multi-modal prompts](#multi-modal-prompts) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/prompts/example_selector_types/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [Quick Start](/v0.1/docs/modules/model_io/prompts/quick_start/) * [Example selectors](/v0.1/docs/modules/model_io/prompts/example_selector_types/) * [Select by length](/v0.1/docs/modules/model_io/prompts/example_selector_types/length_based/) * [Select by similarity](/v0.1/docs/modules/model_io/prompts/example_selector_types/similarity/) * [Few Shot Prompt Templates](/v0.1/docs/modules/model_io/prompts/few_shot/) * [Partial prompt templates](/v0.1/docs/modules/model_io/prompts/partial/) * [Composition](/v0.1/docs/modules/model_io/prompts/pipeline/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * Example selectors Example selectors ================= If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so. The base interface is defined as below: If you have a large number of examples, you may need to programmatically select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined as below. class BaseExampleSelector { addExample(example: Example): Promise<void | string>; selectExamples(input_variables: Example): Promise<Example[]>;} It needs to expose a `selectExamples` - this takes in the input variables and then returns a list of examples method - and an `addExample` method, which saves an example for later selection. It is up to each specific implementation as to how those examples are saved and selected. * * * #### Help us out by providing feedback on this documentation page: [ Previous Quick Start ](/v0.1/docs/modules/model_io/prompts/quick_start/)[ Next Select by length ](/v0.1/docs/modules/model_io/prompts/example_selector_types/length_based/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/prompts/few_shot/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [Quick Start](/v0.1/docs/modules/model_io/prompts/quick_start/) * [Example selectors](/v0.1/docs/modules/model_io/prompts/example_selector_types/) * [Few Shot Prompt Templates](/v0.1/docs/modules/model_io/prompts/few_shot/) * [Partial prompt templates](/v0.1/docs/modules/model_io/prompts/partial/) * [Composition](/v0.1/docs/modules/model_io/prompts/pipeline/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * Few Shot Prompt Templates On this page Few Shot Prompt Templates ========================= Few shot prompting is a prompting technique which provides the Large Language Model (LLM) with a list of examples, and then asks the LLM to generate some text following the lead of the examples provided. An example of this is the following: Say you want your LLM to respond in a specific format. You can few shot prompt the LLM with a list of question answer pairs so it knows what format to respond in. Respond to the users question in the with the following format:Question: What is your name?Answer: My name is John.Question: What is your age?Answer: I am 25 years old.Question: What is your favorite color?Answer: Here we left the last `Answer:` undefined so the LLM can fill it in. The LLM will then generate the following: Answer: I don't have a favorite color; I don't have preferences. ### Use Case[​](#use-case "Direct link to Use Case") In the following example we're few shotting the LLM to rephrase questions into more general queries. We provide two sets of examples with specific questions, and rephrased general questions. The `FewShotChatMessagePromptTemplate` will use our examples and when `.format` is called, we'll see those examples formatted into a string we can pass to the LLM. import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}AI: {output}`);const fewShotPrompt = new FewShotChatMessagePromptTemplate({ examplePrompt, examples, inputVariables: [], // no input variables}); const formattedPrompt = await fewShotPrompt.format({});console.log(formattedPrompt); [ HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'Human: Could the members of The Police perform lawful arrests?\n' + 'AI: what can the members of The Police do?', additional_kwargs: {} }, HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: "Human: Jan Sindel's was born in what country?\n" + "AI: what is Jan Sindel's personal history?", additional_kwargs: {} }] Then, if we use this with another question, the LLM will rephrase the question how we want. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai"; const model = new ChatOpenAI({});const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}AI: {output}`);const fewShotPrompt = new FewShotChatMessagePromptTemplate({ prefix: "Rephrase the users query to be more general, using the following examples", suffix: "Human: {input}", examplePrompt, examples, inputVariables: ["input"],});const formattedPrompt = await fewShotPrompt.format({ input: "What's France's main city?",});const response = await model.invoke(formattedPrompt);console.log(response); AIMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'What is the capital of France?', additional_kwargs: { function_call: undefined }} ### Few Shotting With Functions[​](#few-shotting-with-functions "Direct link to Few Shotting With Functions") You can also partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date. const getCurrentDate = () => { return new Date().toISOString();};const prompt = new FewShotChatMessagePromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective", "date"],});const partialPrompt = await prompt.partial({ date: getCurrentDate,});const formattedPrompt = await partialPrompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z ### Few Shot vs Chat Few Shot[​](#few-shot-vs-chat-few-shot "Direct link to Few Shot vs Chat Few Shot") The chat and non chat few shot prompt templates act in a similar way. The below example will demonstrate using chat and non chat, and the differences with their outputs. import { FewShotPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examples = [ { input: "Could the members of The Police perform lawful arrests?", output: "what can the members of The Police do?", }, { input: "Jan Sindel's was born in what country?", output: "what is Jan Sindel's personal history?", },];const prompt = `Human: {input}AI: {output}`;const examplePromptTemplate = PromptTemplate.fromTemplate(prompt);const exampleChatPromptTemplate = ChatPromptTemplate.fromTemplate(prompt);const chatFewShotPrompt = new FewShotChatMessagePromptTemplate({ examplePrompt: exampleChatPromptTemplate, examples, inputVariables: [], // no input variables});const fewShotPrompt = new FewShotPromptTemplate({ examplePrompt: examplePromptTemplate, examples, inputVariables: [], // no input variables}); console.log("Chat Few Shot: ", await chatFewShotPrompt.formatMessages({}));/**Chat Few Shot: [ HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'Human: Could the members of The Police perform lawful arrests?\n' + 'AI: what can the members of The Police do?', additional_kwargs: {} }, HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: "Human: Jan Sindel's was born in what country?\n" + "AI: what is Jan Sindel's personal history?", additional_kwargs: {} }] */ console.log("Few Shot: ", await fewShotPrompt.formatPromptValue({}));/**Few Shot:Human: Could the members of The Police perform lawful arrests?AI: what can the members of The Police do?Human: Jan Sindel's was born in what country?AI: what is Jan Sindel's personal history? */ Here we can see the main distinctions between `FewShotChatMessagePromptTemplate` and `FewShotPromptTemplate`: input and output values. `FewShotChatMessagePromptTemplate` works by taking in a list of `ChatPromptTemplate` for examples, and its output is a list of instances of `BaseMessage`. On the other hand, `FewShotPromptTemplate` works by taking in a `PromptTemplate` for examples, and its output is a string. With Non Chat Models[​](#with-non-chat-models "Direct link to With Non Chat Models") ------------------------------------------------------------------------------------ LangChain also provides a class for few shot prompt formatting for non chat models: `FewShotPromptTemplate`. The API is largely the same, but the output is formatted differently (chat messages vs strings). ### Partials With Functions[​](#partials-with-functions "Direct link to Partials With Functions") import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examplePrompt = PromptTemplate.fromTemplate("{foo}{bar}");const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted); boobaz\n ### With Functions and Example Selector[​](#with-functions-and-example-selector "Direct link to With Functions and Example Selector") import { ChatPromptTemplate, FewShotChatMessagePromptTemplate,} from "langchain/prompts"; const examplePrompt = PromptTemplate.fromTemplate("An example about {x}");const exampleSelector = await LengthBasedExampleSelector.fromExamples( [{ x: "foo" }, { x: "bar" }], { examplePrompt, maxLength: 200 });const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", exampleSelector, examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted); boobazAn example about fooAn example about bar * * * #### Help us out by providing feedback on this documentation page: [ Previous Select by similarity ](/v0.1/docs/modules/model_io/prompts/example_selector_types/similarity/)[ Next Partial prompt templates ](/v0.1/docs/modules/model_io/prompts/partial/) * [Use Case](#use-case) * [Few Shotting With Functions](#few-shotting-with-functions) * [Few Shot vs Chat Few Shot](#few-shot-vs-chat-few-shot) * [With Non Chat Models](#with-non-chat-models) * [Partials With Functions](#partials-with-functions) * [With Functions and Example Selector](#with-functions-and-example-selector) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/prompts/partial/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [Quick Start](/v0.1/docs/modules/model_io/prompts/quick_start/) * [Example selectors](/v0.1/docs/modules/model_io/prompts/example_selector_types/) * [Few Shot Prompt Templates](/v0.1/docs/modules/model_io/prompts/few_shot/) * [Partial prompt templates](/v0.1/docs/modules/model_io/prompts/partial/) * [Composition](/v0.1/docs/modules/model_io/prompts/pipeline/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * Partial prompt templates On this page Partial prompt templates ======================== Like other methods, it can make sense to "partial" a prompt template - eg pass in a subset of the required values, as to create a new prompt template which expects only the remaining subset of values. LangChain supports this in two ways: 1. Partial formatting with string values. 2. Partial formatting with functions that return string values. These two different ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain. Partial With Strings[​](#partial-with-strings "Direct link to Partial With Strings") ------------------------------------------------------------------------------------ One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in the chain, but the `baz` value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this: import { PromptTemplate } from "langchain/prompts";const prompt = new PromptTemplate({ template: "{foo}{bar}", inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: "foo",});const formattedPrompt = await partialPrompt.format({ bar: "baz",});console.log(formattedPrompt);// foobaz You can also just initialize the prompt with the partialed variables. const prompt = new PromptTemplate({ template: "{foo}{bar}", inputVariables: ["bar"], partialVariables: { foo: "foo", },});const formattedPrompt = await prompt.format({ bar: "baz",});console.log(formattedPrompt);// foobaz Partial With Functions[​](#partial-with-functions "Direct link to Partial With Functions") ------------------------------------------------------------------------------------------ You can also partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date. const getCurrentDate = () => { return new Date().toISOString();};const prompt = new PromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective", "date"],});const partialPrompt = await prompt.partial({ date: getCurrentDate,});const formattedPrompt = await partialPrompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z You can also just initialize the prompt with the partialed variables: const prompt = new PromptTemplate({ template: "Tell me a {adjective} joke about the day {date}", inputVariables: ["adjective"], partialVariables: { date: getCurrentDate, },});const formattedPrompt = await prompt.format({ adjective: "funny",});console.log(formattedPrompt);// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z * * * #### Help us out by providing feedback on this documentation page: [ Previous Few Shot Prompt Templates ](/v0.1/docs/modules/model_io/prompts/few_shot/)[ Next Composition ](/v0.1/docs/modules/model_io/prompts/pipeline/) * [Partial With Strings](#partial-with-strings) * [Partial With Functions](#partial-with-functions) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/quick_start/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/) * [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * Quick Start On this page Quick Start =========== Language models output text. But you may often want to get more structured information than just text back. This is where output parsers come in. Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement: * `getFormatInstructions()`: A method which returns a string containing instructions for how the output of a language model should be formatted. You can inject this into your prompt if necessary. * `parse()`: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure. And then one optional one: `parseWithPrompt()`: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Get started[​](#get-started "Direct link to Get started") --------------------------------------------------------- Below we go over one useful type of output parser, the `StructuredOutputParser`. This output parser can be used when you want to return multiple fields. **Note:** If you want complex schema returned (i.e. a JSON object with arrays of strings), you can use [Zod Schema](https://zod.dev/) as [detailed here](/v0.1/docs/modules/model_io/output_parsers/types/structured/#structured-output-parser-with-zod-schema). tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";import { RunnableSequence } from "@langchain/core/runnables";import { StructuredOutputParser } from "langchain/output_parsers";import { PromptTemplate } from "@langchain/core/prompts";const parser = StructuredOutputParser.fromNamesAndDescriptions({ answer: "answer to the user's question", source: "source used to answer the user's question, should be a website.",});const chain = RunnableSequence.from([ PromptTemplate.fromTemplate( "Answer the users question as best as possible.\n{format_instructions}\n{question}" ), new OpenAI({ temperature: 0 }), parser,]);console.log(parser.getFormatInstructions());/*Answer the users question as best as possible.You must format your output as a JSON value that adheres to a given "JSON Schema" instance."JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```What is the capital of France?*/const response = await chain.invoke({ question: "What is the capital of France?", format_instructions: parser.getFormatInstructions(),});console.log(response);// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' } #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * [StructuredOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.StructuredOutputParser.html) from `langchain/output_parsers` * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` LCEL[​](#lcel "Direct link to LCEL") ------------------------------------ Output parsers implement the [Runnable interface](/v0.1/docs/expression_language/interface/), the basic building block of the LangChain Expression Language (LCEL). This means they support `invoke`, `stream`, `batch`, and `streamLog` calls. Output parsers accept model outputs (a string or `BaseMessage`) as input and can return an arbitrary type. This is convenient for chaining as shown above. await parser.invoke(`\`\`\`json{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }\`\`\``);// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] } While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects as the model generates them, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output. * * * #### Help us out by providing feedback on this documentation page: [ Previous Output Parsers ](/v0.1/docs/modules/model_io/output_parsers/)[ Next Custom output parsers ](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Get started](#get-started) * [LCEL](#lcel) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/indexing/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * Indexing On this page Indexing ======== Here, we will look at a basic indexing workflow using the LangChain indexing API. The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps: * Avoid writing duplicated content into the vector store * Avoid re-writing unchanged content * Avoid re-computing embeddings over unchanged content All of which should save you time and money, as well as improve your vector search results. Crucially, the indexing API will work even with documents that have gone through several transformation steps (e.g., via text chunking) with respect to the original source documents. How it works[​](#how-it-works "Direct link to How it works") ------------------------------------------------------------ LangChain indexing makes use of a record manager (`RecordManager`) that keeps track of document writes into the vector store. When indexing content, hashes are computed for each document, and the following information is stored in the record manager: * the document hash (hash of both page content and metadata) * write time * the source ID - each document should include information in its metadata to allow us to determine the ultimate source of this document Deletion Modes[​](#deletion-modes "Direct link to Deletion Modes") ------------------------------------------------------------------ When indexing documents into a vector store, it's possible that some existing documents in the vector store should be deleted. In certain situations you may want to remove any existing documents that are derived from the same sources as the new documents being indexed. In others you may want to delete all existing documents wholesale. The indexing API deletion modes let you pick the behavior you want: Cleanup Mode De-Duplicates Content Parallelizable Cleans Up Deleted Source Docs Cleans Up Mutations of Source Docs and/or Derived Docs Clean Up Timing None ✅ ✅ ❌ ❌ \- Incremental ✅ ✅ ❌ ✅ Continuously Full ✅ ❌ ✅ ✅ At end of indexing `None` does not do any automatic clean up, allowing the user to manually do clean up of old content. `incremental` and `full` offer the following automated clean up: * If the content of the source document or derived documents has changed, both `incremental` or `full` modes will clean up (delete) previous versions of the content. * If the source document has been deleted (meaning it is not included in the documents currently being indexed), the full cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not. When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted. * `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes. * `full` mode does the clean up after all batches have been written. Requirements[​](#requirements "Direct link to Requirements") ------------------------------------------------------------ 1. Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously. 2. Only works with LangChain `vectorstore`'s that support: a). document addition by id (`addDocuments` method with ids argument) b). delete by id (delete method with ids argument) Compatible Vectorstores: [`PGVector`](/v0.1/docs/integrations/vectorstores/pgvector/), [`Chroma`](/v0.1/docs/integrations/vectorstores/chroma/), [`CloudflareVectorize`](/v0.1/docs/integrations/vectorstores/cloudflare_vectorize/), [`ElasticVectorSearch`](/v0.1/docs/integrations/vectorstores/elasticsearch/), [`FAISS`](/v0.1/docs/integrations/vectorstores/faiss/), [`MomentoVectorIndex`](/v0.1/docs/integrations/vectorstores/momento_vector_index/), [`Pinecone`](/v0.1/docs/integrations/vectorstores/pinecone/), [`SupabaseVectorStore`](/v0.1/docs/integrations/vectorstores/supabase/), [`VercelPostgresVectorStore`](/v0.1/docs/integrations/vectorstores/vercel_postgres/), [`Weaviate`](/v0.1/docs/integrations/vectorstores/weaviate/), [`Xata`](/v0.1/docs/integrations/vectorstores/xata/) Caution[​](#caution "Direct link to Caution") --------------------------------------------- The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes). If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content. This is unlikely to be an issue in actual settings for the following reasons: 1. The `RecordManager` uses higher resolution timestamps. 2. The data would need to change between the first and the second tasks runs, which becomes unlikely if the time interval between the tasks is small. 3. Indexing tasks typically take more than a few ms. Quickstart[​](#quickstart "Direct link to Quickstart") ------------------------------------------------------ import { PostgresRecordManager } from "@langchain/community/indexes/postgres";import { index } from "langchain/indexes";import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";import { PoolConfig } from "pg";import { OpenAIEmbeddings } from "@langchain/openai";import { CharacterTextSplitter } from "langchain/text_splitter";import { BaseDocumentLoader } from "langchain/document_loaders/base";// First, follow set-up instructions at// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvectorconst config = { postgresConnectionOptions: { type: "postgres", host: "127.0.0.1", port: 5432, user: "myuser", password: "ChangeMe", database: "api", } as PoolConfig, tableName: "testlangchain", columns: { idColumnName: "id", vectorColumnName: "vector", contentColumnName: "content", metadataColumnName: "metadata", },};const vectorStore = await PGVectorStore.initialize( new OpenAIEmbeddings(), config);// Create a new record managerconst recordManagerConfig = { postgresConnectionOptions: { type: "postgres", host: "127.0.0.1", port: 5432, user: "myuser", password: "ChangeMe", database: "api", } as PoolConfig, tableName: "upsertion_records",};const recordManager = new PostgresRecordManager( "test_namespace", recordManagerConfig);// Create the schema if it doesn't existawait recordManager.createSchema();// Index some documentsconst doc1 = { pageContent: "kitty", metadata: { source: "kitty.txt" },};const doc2 = { pageContent: "doggy", metadata: { source: "doggy.txt" },};/** * Hacky helper method to clear content. See the `full` mode section to to understand why it works. */async function clear() { await index({ docsSource: [], recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, });}// No cleanupawait clear();// This mode does not do automatic clean up of old versions of content; however, it still takes care of content de-duplication.console.log( await index({ docsSource: [doc1, doc1, doc1, doc1, doc1, doc1], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/await clear();console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// Second time around all content will be skippedconsole.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 2, }*/// Updated content will be added, but old won't be deletedconst doc1Updated = { pageContent: "kitty updated", metadata: { source: "kitty.txt" },};console.log( await index({ docsSource: [doc1Updated, doc2], recordManager, vectorStore, options: { cleanup: undefined, sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 0, numSkipped: 1, }*//*Resulting records in the database: [ { pageContent: "kitty", metadata: { source: "kitty.txt" }, }, { pageContent: "doggy", metadata: { source: "doggy.txt" }, }, { pageContent: "kitty updated", metadata: { source: "kitty.txt" }, } ]*/// Incremental modeawait clear();console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// Indexing again should result in both documents getting skipped – also skipping the embedding operation!console.log( await index({ docsSource: [doc1, doc2], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 2, }*/// If we provide no documents with incremental indexing mode, nothing will change.console.log( await index({ docsSource: [], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// If we mutate a document, the new version will be written and all old versions sharing the same source will be deleted.// This only affects the documents with the same source id!const changedDoc1 = { pageContent: "kitty updated", metadata: { source: "kitty.txt" },};console.log( await index({ docsSource: [changedDoc1], recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/* { numAdded: 1, numUpdated: 0, numDeleted: 1, numSkipped: 0, }*/// Full modeawait clear();// In full mode the user should pass the full universe of content that should be indexed into the indexing function.// Any documents that are not passed into the indexing function and are present in the vectorStore will be deleted!// This behavior is useful to handle deletions of source documents.const allDocs = [doc1, doc2];console.log( await index({ docsSource: allDocs, recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, }));/* { numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0, }*/// Say someone deleted the first doc:const doc2Only = [doc2];// Using full mode will clean up the deleted content as well.// This afffects all documents regardless of source id!console.log( await index({ docsSource: doc2Only, recordManager, vectorStore, options: { cleanup: "full", sourceIdKey: "source", }, }));/* { numAdded: 0, numUpdated: 0, numDeleted: 1, numSkipped: 1, }*/await clear();const newDoc1 = { pageContent: "kitty kitty kitty kitty kitty", metadata: { source: "kitty.txt" },};const newDoc2 = { pageContent: "doggy doggy the doggy", metadata: { source: "doggy.txt" },};const splitter = new CharacterTextSplitter({ separator: "t", keepSeparator: true, chunkSize: 12, chunkOverlap: 2,});const newDocs = await splitter.splitDocuments([newDoc1, newDoc2]);console.log(newDocs);/*[ { pageContent: 'kitty kit', metadata: {source: 'kitty.txt'} }, { pageContent: 'tty kitty ki', metadata: {source: 'kitty.txt'} }, { pageContent: 'tty kitty', metadata: {source: 'kitty.txt'}, }, { pageContent: 'doggy doggy', metadata: {source: 'doggy.txt'}, { pageContent: 'the doggy', metadata: {source: 'doggy.txt'}, }]*/console.log( await index({ docsSource: newDocs, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 5, numUpdated: 0, numDeleted: 0, numSkipped: 0,}*/const changedDoggyDocs = [ { pageContent: "woof woof", metadata: { source: "doggy.txt" }, }, { pageContent: "woof woof woof", metadata: { source: "doggy.txt" }, },];console.log( await index({ docsSource: changedDoggyDocs, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 2, numUpdated: 0, numDeleted: 2, numSkipped: 0,}*/// Usage with document loaders// Create a document loaderclass MyCustomDocumentLoader extends BaseDocumentLoader { load() { return Promise.resolve([ { pageContent: "kitty", metadata: { source: "kitty.txt" }, }, { pageContent: "doggy", metadata: { source: "doggy.txt" }, }, ]); }}await clear();const loader = new MyCustomDocumentLoader();console.log( await index({ docsSource: loader, recordManager, vectorStore, options: { cleanup: "incremental", sourceIdKey: "source", }, }));/*{ numAdded: 2, numUpdated: 0, numDeleted: 0, numSkipped: 0,}*/// Closing resourcesawait recordManager.end();await vectorStore.end(); #### API Reference: * [PostgresRecordManager](https://api.js.langchain.com/classes/langchain_community_indexes_postgres.PostgresRecordManager.html) from `@langchain/community/indexes/postgres` * [index](https://api.js.langchain.com/functions/langchain_indexes.index.html) from `langchain/indexes` * [PGVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [CharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `langchain/text_splitter` * [BaseDocumentLoader](https://api.js.langchain.com/classes/langchain_document_loaders_base.BaseDocumentLoader.html) from `langchain/document_loaders/base` * * * #### Help us out by providing feedback on this documentation page: [ Previous Custom vectorstores ](/v0.1/docs/modules/data_connection/vectorstores/custom/)[ Next Google Vertex AI ](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [How it works](#how-it-works) * [Deletion Modes](#deletion-modes) * [Requirements](#requirements) * [Caution](#caution) * [Quickstart](#quickstart) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/experimental/graph_databases/memgraph/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Multimodal embedding models](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Graph databases](/v0.1/docs/modules/data_connection/experimental/graph_databases/memgraph/) * [Memgraph](/v0.1/docs/modules/data_connection/experimental/graph_databases/memgraph/) * [Neo4j](/v0.1/docs/modules/data_connection/experimental/graph_databases/neo4j/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * Experimental * Graph databases * Memgraph On this page Memgraph ======== Setup[​](#setup "Direct link to Setup") --------------------------------------- ### Install LangChain dependencies[​](#install-langchain-dependencies "Direct link to Install LangChain dependencies") Install the dependencies needed for Memgraph: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai neo4j-driver @langchain/community yarn add @langchain/openai neo4j-driver @langchain/community pnpm add @langchain/openai neo4j-driver @langchain/community ### Install Memgraph[​](#install-memgraph "Direct link to Install Memgraph") Memgraph bundles the database along with various analytical tools into distinct Docker images. If you're new to Memgraph or you're in a developing stage, we recommend running Memgraph Platform with Docker Compose. Besides the database, it also includes all the tools you might need to analyze your data, such as command-line interface [mgconsole](https://memgraph.com/docs/getting-started/cli), web interface [Memgraph Lab](https://memgraph.com/docs/data-visualization) and a complete set of algorithms within a [MAGE](https://memgraph.com/docs/advanced-algorithms) library. With the Docker running in the background, run the following command in the console: Linux/MacOS: curl https://install.memgraph.com | sh Windows: iwr https://windows.memgraph.com | iex For other options of installation, check the [Getting started guide](https://memgraph.com/docs/getting-started). Usage[​](#usage "Direct link to Usage") --------------------------------------- The example below shows how to instantiate Memgraph graph, create a small database and retrieve information from the graph by generating Cypher query language statements using `GraphCypherQAChain`. import { MemgraphGraph } from "@langchain/community/graphs/memgraph_graph";import { OpenAI } from "@langchain/openai";import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";/** * This example uses Memgraph database, an in-memory graph database. * To set it up follow the instructions on https://memgraph.com/docs/getting-started. */const url = "bolt://localhost:7687";const username = "";const password = "";const graph = await MemgraphGraph.initialize({ url, username, password });const model = new OpenAI({ temperature: 0 });// Populate the database with two nodes and a relationshipawait graph.query( "CREATE (c1:Character {name: 'Jon Snow'}), (c2: Character {name: 'Olly'}) CREATE (c2)-[:KILLED {count: 1, method: 'Knife'}]->(c1);");// Refresh schemaawait graph.refreshSchema();const chain = GraphCypherQAChain.fromLLM({ llm: model, graph,});const res = await chain.run("Who killed Jon Snow and how?");console.log(res);// Olly killed Jon Snow using a knife. #### API Reference: * [MemgraphGraph](https://api.js.langchain.com/classes/langchain_community_graphs_memgraph_graph.MemgraphGraph.html) from `@langchain/community/graphs/memgraph_graph` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [GraphCypherQAChain](https://api.js.langchain.com/classes/langchain_chains_graph_qa_cypher.GraphCypherQAChain.html) from `langchain/chains/graph_qa/cypher` Disclaimer ⚠️ ============= _Security note_: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions. Failure to do so may result in data corruption or loss, since the calling code may attempt commands that would result in deletion, mutation of data if appropriately prompted or reading sensitive data if such data is present in the database. The best way to guard against such negative outcomes is to (as appropriate) limit the permissions granted to the credentials used with this tool. For example, [creating read only users](https://memgraph.com/docs/configuration/security#role-based-access-control-enterprise) for the database is a good way to ensure that the calling code cannot mutate or delete data. See the [security page](/v0.1/docs/security/) for more information. * * * #### Help us out by providing feedback on this documentation page: [ Previous Google Vertex AI ](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/)[ Next Neo4j ](/v0.1/docs/modules/data_connection/experimental/graph_databases/neo4j/) * [Setup](#setup) * [Install LangChain dependencies](#install-langchain-dependencies) * [Install Memgraph](#install-memgraph) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/) * [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/) * [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) * [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) * [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/) * [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/) * [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/) * [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/) * [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/) * [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/) * [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/) * [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/) * [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * Output Parser Types Output Parser Types =================== This is a list of the most popular output parsers LangChain supports. The table below has various pieces of information: **Name**: The name of the output parser **Supports Streaming**: Whether the output parser supports streaming. **Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser. **Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output. **Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs. **Output Type**: The output type of the object returned by the parser. **Description**: Our commentary on this output parser and when to use it. Name Supports Streaming Has Format Instructions Calls LLM Input Type Output Type Description [String](/v0.1/docs/modules/model_io/output_parsers/types/string/) ✅ `string` or `Message` `string` Takes language model output (either an entire response or as a stream) and converts it into a string. This is useful for standardizing chat model and LLM output and makes working with chat model outputs much more convenient. [HTTPResponse](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) ✅ `string` or `Message` `binary` Allows you to stream LLM output properly formatted bytes a web [HTTP response](https://developer.mozilla.org/en-US/docs/Web/API/Response) for a variety of content types. [OpenAIFunctions](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) ✅ (Passes `functions` to model) `Message` (with `function_call`) JSON object Allows you to use OpenAI function calling to structure the return output. If you are using a model that supports function calling, this is generally the most reliable method. [CSV](/v0.1/docs/modules/model_io/output_parsers/types/csv/) ✅ `string` or `Message` `string[]` Returns a list of comma separated values. [OutputFixing](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/) ✅ `string` or `Message` Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. [Structured](/v0.1/docs/modules/model_io/output_parsers/types/structured/) ✅ `string` or `Message` `Record<string, string>` An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. [Datetime](/v0.1/docs/modules/model_io/output_parsers/types/datetime/) ✅ `string` or `Message` `Date` Parses response into a JavaScript date. * * * #### Help us out by providing feedback on this documentation page: [ Previous Custom output parsers ](/v0.1/docs/modules/model_io/output_parsers/custom/)[ Next String output parser ](/v0.1/docs/modules/model_io/output_parsers/types/string/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Chroma Self Query Retriever On this page Chroma Self Query Retriever =========================== This example shows how to use a self query retriever with a Chroma vector store. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { ChromaTranslator } from "langchain/retrievers/self_query/chroma";import { Chroma } from "@langchain/community/vectorstores/chroma";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await Chroma.fromDocuments(docs, embeddings, { collectionName: "a-movie-collection",});const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new ChromaTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [ChromaTranslator](https://api.js.langchain.com/classes/langchain_retrievers_self_query_chroma.ChromaTranslator.html) from `langchain/retrievers/self_query/chroma` * [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new ChromaTranslator(), searchParams: { filter: { rating: { $gt: 8.5, }, }, mergeFiltersOperator: "and", },}); See [the official docs](https://docs.trychroma.com/usage-guide#using-where-filters) for a full list of filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Self-querying ](/v0.1/docs/modules/data_connection/retrievers/self_query/)[ Next HNSWLib Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * HNSWLib Self Query Retriever On this page HNSWLib Self Query Retriever ============================ This example shows how to use a self query retriever with an HNSWLib vector store. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { FunctionalTranslator } from "langchain/retrievers/self_query/functional";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await HNSWLib.fromDocuments(docs, embeddings);const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new FunctionalTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib` * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [FunctionalTranslator](https://api.js.langchain.com/classes/langchain_core_structured_query.FunctionalTranslator.html) from `langchain/retrievers/self_query/functional` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new FunctionalTranslator(), searchParams: { filter: (doc: Document) => doc.metadata && doc.metadata.rating > 8.5, mergeFiltersOperator: "and", },}); * * * #### Help us out by providing feedback on this documentation page: [ Previous Chroma Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/)[ Next Memory Vector Store Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Pinecone Self Query Retriever On this page Pinecone Self Query Retriever ============================= This example shows how to use a self query retriever with a Pinecone vector store. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/pinecone yarn add @langchain/openai @langchain/pinecone pnpm add @langchain/openai @langchain/pinecone import { Pinecone } from "@pinecone-database/pinecone";import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { PineconeTranslator } from "langchain/retrievers/self_query/pinecone";import { PineconeStore } from "@langchain/pinecone";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */if ( !process.env.PINECONE_API_KEY || !process.env.PINECONE_ENVIRONMENT || !process.env.PINECONE_INDEX) { throw new Error( "PINECONE_ENVIRONMENT and PINECONE_API_KEY and PINECONE_INDEX must be set" );}const pinecone = new Pinecone();const index = pinecone.Index(process.env.PINECONE_INDEX);const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await PineconeStore.fromDocuments(docs, embeddings, { pineconeIndex: index,});const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new PineconeTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [PineconeTranslator](https://api.js.langchain.com/classes/langchain_retrievers_self_query_pinecone.PineconeTranslator.html) from `langchain/retrievers/self_query/pinecone` * [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new PineconeTranslator(), searchParams: { filter: { rating: { $gt: 8.5, }, }, mergeFiltersOperator: "and", },}); See the [official docs](https://docs.pinecone.io/docs/metadata-filtering) for more on how to construct metadata filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Memory Vector Store Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/)[ Next Qdrant Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Memory Vector Store Self Query Retriever On this page Memory Vector Store Self Query Retriever ======================================== This example shows how to use a self query retriever with a basic, in-memory vector store. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { MemoryVectorStore } from "langchain/vectorstores/memory";import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { FunctionalTranslator } from "langchain/retrievers/self_query/functional";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new FunctionalTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory` * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [FunctionalTranslator](https://api.js.langchain.com/classes/langchain_core_structured_query.FunctionalTranslator.html) from `langchain/retrievers/self_query/functional` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new FunctionalTranslator(), searchParams: { filter: (doc: Document) => doc.metadata && doc.metadata.rating > 8.5, mergeFiltersOperator: "and", },}); * * * #### Help us out by providing feedback on this documentation page: [ Previous HNSWLib Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/)[ Next Pinecone Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Qdrant Self Query Retriever On this page Qdrant Self Query Retriever =========================== This example shows how to use a self query retriever with a Qdrant vector store. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community @qdrant/js-client-rest yarn add @langchain/openai @langchain/community @qdrant/js-client-rest pnpm add @langchain/openai @langchain/community @qdrant/js-client-rest import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { QdrantVectorStore } from "@langchain/community/vectorstores/qdrant";import { QdrantTranslator } from "@langchain/community/retrievers/self_query/qdrant";import { Document } from "@langchain/core/documents";import { QdrantClient } from "@qdrant/js-client-rest";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const QDRANT_URL = "http://127.0.0.1:6333";const QDRANT_COLLECTION_NAME = "some-collection-name";const client = new QdrantClient({ url: QDRANT_URL });const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const vectorStore = await QdrantVectorStore.fromDocuments(docs, embeddings, { client, collectionName: QDRANT_COLLECTION_NAME,});const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new QdrantTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.getRelevantDocuments( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.getRelevantDocuments( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.getRelevantDocuments( "Which cool movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.getRelevantDocuments( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [QdrantVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_qdrant.QdrantVectorStore.html) from `@langchain/community/vectorstores/qdrant` * [QdrantTranslator](https://api.js.langchain.com/classes/langchain_community_retrievers_self_query_qdrant.QdrantTranslator.html) from `@langchain/community/retrievers/self_query/qdrant` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator here. * You can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new QdrantTranslator(), searchParams: { filter: { must: [ { key: "metadata.rating", range: { gt: 8.5, }, }, ], }, mergeFiltersOperator: "and", },}); See the [official docs](https://qdrant.tech/documentation/concepts/filtering/) for more on how to construct metadata filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Pinecone Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/)[ Next Supabase Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Supabase Self Query Retriever On this page Supabase Self Query Retriever ============================= This example shows how to use a self query retriever with a [Supabase](https://supabase.com) vector store. If you haven't already set up Supabase, please [follow the instructions here](/v0.1/docs/integrations/vectorstores/supabase/). Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community import { createClient } from "@supabase/supabase-js";import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { SupabaseTranslator } from "langchain/retrievers/self_query/supabase";import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. */if (!process.env.SUPABASE_URL || !process.env.SUPABASE_PRIVATE_KEY) { throw new Error( "Supabase URL or private key not set. Please set it in the .env file" );}const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";const client = createClient( process.env.SUPABASE_URL, process.env.SUPABASE_PRIVATE_KEY);const vectorStore = await SupabaseVectorStore.fromDocuments(docs, embeddings, { client,});const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. LangChain provides one here. */ structuredQueryTranslator: new SupabaseTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "Which movies are less than 90 minutes?");const query2 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or drama and are less than 90 minutes?");console.log(query1, query2, query3, query4); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [SupabaseTranslator](https://api.js.langchain.com/classes/langchain_retrievers_self_query_supabase.SupabaseTranslator.html) from `langchain/retrievers/self_query/supabase` * [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * translator here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new SupabaseTranslator(), searchParams: { filter: (rpc: SupabaseFilter) => rpc.filter("metadata->>type", "eq", "movie"),, mergeFiltersOperator: "and", }}); See the [official docs](https://postgrest.org/en/stable/references/api/tables_views.html?highlight=operators#json-columns) for more on how to construct metadata filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Qdrant Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/)[ Next Vectara Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/integrations/text_embedding/fireworks/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Providers](/v0.1/docs/integrations/platforms/) * [Providers](/v0.1/docs/integrations/platforms/) * [Anthropic](/v0.1/docs/integrations/platforms/anthropic/) * [AWS](/v0.1/docs/integrations/platforms/aws/) * [Google](/v0.1/docs/integrations/platforms/google/) * [Microsoft](/v0.1/docs/integrations/platforms/microsoft/) * [OpenAI](/v0.1/docs/integrations/platforms/openai/) * [Components](/v0.1/docs/integrations/components/) * [LLMs](/v0.1/docs/integrations/llms/) * [Chat models](/v0.1/docs/integrations/chat/) * [Document loaders](/v0.1/docs/integrations/document_loaders/) * [Document transformers](/v0.1/docs/integrations/document_transformers/) * [Document compressors](/v0.1/docs/integrations/document_compressors/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * [Alibaba Tongyi](/v0.1/docs/integrations/text_embedding/alibaba_tongyi/) * [Azure OpenAI](/v0.1/docs/integrations/text_embedding/azure_openai/) * [Baidu Qianfan](/v0.1/docs/integrations/text_embedding/baidu_qianfan/) * [Bedrock](/v0.1/docs/integrations/text_embedding/bedrock/) * [Cloudflare Workers AI](/v0.1/docs/integrations/text_embedding/cloudflare_ai/) * [Cohere](/v0.1/docs/integrations/text_embedding/cohere/) * [Fireworks](/v0.1/docs/integrations/text_embedding/fireworks/) * [Google AI](/v0.1/docs/integrations/text_embedding/google_generativeai/) * [Google PaLM](/v0.1/docs/integrations/text_embedding/google_palm/) * [Google Vertex AI](/v0.1/docs/integrations/text_embedding/google_vertex_ai/) * [Gradient AI](/v0.1/docs/integrations/text_embedding/gradient_ai/) * [HuggingFace Inference](/v0.1/docs/integrations/text_embedding/hugging_face_inference/) * [Llama CPP](/v0.1/docs/integrations/text_embedding/llama_cpp/) * [Minimax](/v0.1/docs/integrations/text_embedding/minimax/) * [Mistral AI](/v0.1/docs/integrations/text_embedding/mistralai/) * [Nomic](/v0.1/docs/integrations/text_embedding/nomic/) * [Ollama](/v0.1/docs/integrations/text_embedding/ollama/) * [OpenAI](/v0.1/docs/integrations/text_embedding/openai/) * [Prem AI](/v0.1/docs/integrations/text_embedding/premai/) * [TensorFlow](/v0.1/docs/integrations/text_embedding/tensorflow/) * [Together AI](/v0.1/docs/integrations/text_embedding/togetherai/) * [HuggingFace Transformers](/v0.1/docs/integrations/text_embedding/transformers/) * [Voyage AI](/v0.1/docs/integrations/text_embedding/voyageai/) * [ZhipuAI](/v0.1/docs/integrations/text_embedding/zhipuai/) * [Vector stores](/v0.1/docs/integrations/vectorstores/) * [Retrievers](/v0.1/docs/integrations/retrievers/) * [Tools](/v0.1/docs/integrations/tools/) * [Agents and toolkits](/v0.1/docs/integrations/toolkits/) * [Chat Memory](/v0.1/docs/integrations/chat_memory/) * [Stores](/v0.1/docs/integrations/stores/) * [](/v0.1/) * [Components](/v0.1/docs/integrations/components/) * [Text embedding models](/v0.1/docs/integrations/text_embedding/) * Fireworks On this page Fireworks ========= The `FireworksEmbeddings` class allows you to use the Fireworks AI API to generate embeddings. Setup[​](#setup "Direct link to Setup") --------------------------------------- First, sign up for a [Fireworks API key](https://fireworks.ai/) and set it as an environment variable called `FIREWORKS_API_KEY`. Next, install the `@langchain/community` package as shown below: tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/community yarn add @langchain/community pnpm add @langchain/community Usage[​](#usage "Direct link to Usage") --------------------------------------- import { FireworksEmbeddings } from "@langchain/community/embeddings/fireworks";/* Embed queries */const fireworksEmbeddings = new FireworksEmbeddings();const res = await fireworksEmbeddings.embedQuery("Hello world");console.log(res);/* Embed documents */const documentRes = await fireworksEmbeddings.embedDocuments([ "Hello world", "Bye bye",]);console.log(documentRes); #### API Reference: * [FireworksEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_fireworks.FireworksEmbeddings.html) from `@langchain/community/embeddings/fireworks` * * * #### Help us out by providing feedback on this documentation page: [ Previous Cohere ](/v0.1/docs/integrations/text_embedding/cohere/)[ Next Google AI ](/v0.1/docs/integrations/text_embedding/google_generativeai/) * [Setup](#setup) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Vectara Self Query Retriever On this page Vectara Self Query Retriever ============================ This example shows how to use a self query retriever with a [Vectara](https://vectara.com/) vector store. If you haven't already set up Vectara, please [follow the instructions here](/v0.1/docs/integrations/vectorstores/vectara/). Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai @langchain/community yarn add @langchain/openai @langchain/community pnpm add @langchain/openai @langchain/community This example shows how to intialize a `SelfQueryRetriever` with a vector store: import { AttributeInfo } from "langchain/schema/query_constructor";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { OpenAI } from "@langchain/openai";import { VectaraStore } from "@langchain/community/vectorstores/vectara";import { VectaraTranslator } from "langchain/retrievers/self_query/vectara";import { FakeEmbeddings } from "@langchain/core/utils/testing";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, rating: 9.9, director: "Andrei Tarkovsky", genre: "science fiction", }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. * * We need to setup the filters in the vectara as well otherwise filter won't work. * To setup the filter in vectara, go to Data -> {your_created_corpus} -> overview * In the overview section edit the filters section and all the following attributes in * the filters. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. * We also need to provide an embeddings object. This is used to embed the documents. */const config = { customerId: Number(process.env.VECTARA_CUSTOMER_ID), corpusId: Number(process.env.VECTARA_CORPUS_ID), apiKey: String(process.env.VECTARA_API_KEY), verbose: true,};const vectorStore = await VectaraStore.fromDocuments( docs, new FakeEmbeddings(), config);const llm = new OpenAI();const documentContents = "Brief summary of a movie";const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to create a basic translator that translates the queries into a * filter format that the vector store can understand. We provide a basic translator * here, but you can create your own translator by extending BaseTranslator * abstract class. Note that the vector store needs to support filtering on the metadata * attributes you want to query on. */ structuredQueryTranslator: new VectaraTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. */const query1 = await selfQueryRetriever.invoke( "What are some movies about dinosaurs");const query2 = await selfQueryRetriever.invoke( "I want to watch a movie rated higher than 8.5");const query3 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");const query4 = await selfQueryRetriever.invoke( "Which movies are either comedy or science fiction and are rated higher than 8.5?");console.log(query1, query2, query3, query4); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [VectaraStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_vectara.VectaraStore.html) from `@langchain/community/vectorstores/vectara` * [VectaraTranslator](https://api.js.langchain.com/classes/langchain_retrievers_self_query_vectara.VectaraTranslator.html) from `langchain/retrievers/self_query/vectara` * [FakeEmbeddings](https://api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. LangChain provides one here. */ structuredQueryTranslator: new VectaraTranslator()(), searchParams: { filter: { filter: "( doc.genre = 'science fiction' ) and ( doc.rating > 8.5 )", }, mergeFiltersOperator: "and", },}); See the [official docs](https://docs.vectara.com/) for more on how to construct metadata filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Supabase Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/)[ Next Weaviate Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Document loaders](/v0.1/docs/modules/data_connection/document_loaders/) * [Text Splitters](/v0.1/docs/modules/data_connection/document_transformers/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Custom retrievers](/v0.1/docs/modules/data_connection/retrievers/custom/) * [Contextual compression](/v0.1/docs/modules/data_connection/retrievers/contextual_compression/) * [Matryoshka Retriever](/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/) * [MultiQuery Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) * [MultiVector Retriever](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) * [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * [Chroma Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/) * [HNSWLib Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/) * [Memory Vector Store Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/) * [Pinecone Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/) * [Qdrant Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/) * [Supabase Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/) * [Vectara Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/) * [Weaviate Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/) * [Similarity Score Threshold](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Time-weighted vector store retriever](/v0.1/docs/modules/data_connection/retrievers/time_weighted_vectorstore/) * [Vector store-backed retriever](/v0.1/docs/modules/data_connection/retrievers/vectorstore/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/) * [Vector stores](/v0.1/docs/modules/data_connection/vectorstores/) * [Indexing](/v0.1/docs/modules/data_connection/indexing/) * [Experimental](/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Retrievers](/v0.1/docs/modules/data_connection/retrievers/) * [Self-querying](/v0.1/docs/modules/data_connection/retrievers/self_query/) * Weaviate Self Query Retriever On this page Weaviate Self Query Retriever ============================= This example shows how to use a self query retriever with a [Weaviate](https://weaviate.io/) vector store. If you haven't already set up Weaviate, please [follow the instructions here](/v0.1/docs/integrations/vectorstores/weaviate/). Usage[​](#usage "Direct link to Usage") --------------------------------------- This example shows how to intialize a `SelfQueryRetriever` with a vector store: Weaviate has their own standalone integration package with LangChain, accessible via [`@langchain/weaviate`](https://www.npmjs.com/package/@langchain/weaviate) on NPM! tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/weaviate @langchain/openai @langchain/community yarn add @langchain/weaviate @langchain/openai @langchain/community pnpm add @langchain/weaviate @langchain/openai @langchain/community import weaviate from "weaviate-ts-client";import { AttributeInfo } from "langchain/schema/query_constructor";import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";import { SelfQueryRetriever } from "langchain/retrievers/self_query";import { WeaviateStore } from "@langchain/community/vectorstores/weaviate";import { WeaviateTranslator } from "langchain/retrievers/self_query/weaviate";import { Document } from "@langchain/core/documents";/** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction" }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated" }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }),];/** * Next, we define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. * We also provide a description of each attribute and the type of the attribute. * This is used to generate the query prompts. */const attributeInfo: AttributeInfo[] = [ { name: "genre", description: "The genre of the movie", type: "string or array of strings", }, { name: "year", description: "The year the movie was released", type: "number", }, { name: "director", description: "The director of the movie", type: "string", }, { name: "rating", description: "The rating of the movie (1-10)", type: "number", }, { name: "length", description: "The length of the movie in minutes", type: "number", },];/** * Next, we instantiate a vector store. This is where we store the embeddings of the documents. */const embeddings = new OpenAIEmbeddings();const llm = new OpenAI();const documentContents = "Brief summary of a movie";// eslint-disable-next-line @typescript-eslint/no-explicit-anyconst client = (weaviate as any).client({ scheme: process.env.WEAVIATE_SCHEME || "https", host: process.env.WEAVIATE_HOST || "localhost", apiKey: process.env.WEAVIATE_API_KEY ? // eslint-disable-next-line @typescript-eslint/no-explicit-any new (weaviate as any).ApiKey(process.env.WEAVIATE_API_KEY) : undefined,});const vectorStore = await WeaviateStore.fromDocuments(docs, embeddings, { client, indexName: "Test", textKey: "text", metadataKeys: ["year", "director", "rating", "genre"],});const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. LangChain provides one here. */ structuredQueryTranslator: new WeaviateTranslator(),});/** * Now we can query the vector store. * We can ask questions like "Which movies are less than 90 minutes?" or "Which movies are rated higher than 8.5?". * We can also ask questions like "Which movies are either comedy or drama and are less than 90 minutes?". * The retriever will automatically convert these questions into queries that can be used to retrieve documents. * * Note that unlike other vector stores, you have to make sure each metadata keys are actually presnt in the database, * meaning that Weaviate will throw an error if the self query chain generate a query with a metadata key that does * not exist in your Weaviate database. */const query1 = await selfQueryRetriever.invoke( "Which movies are rated higher than 8.5?");const query2 = await selfQueryRetriever.invoke( "Which movies are directed by Greta Gerwig?");console.log(query1, query2); #### API Reference: * [AttributeInfo](https://api.js.langchain.com/classes/langchain_schema_query_constructor.AttributeInfo.html) from `langchain/schema/query_constructor` * [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai` * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [SelfQueryRetriever](https://api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) from `langchain/retrievers/self_query` * [WeaviateStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_weaviate.WeaviateStore.html) from `@langchain/community/vectorstores/weaviate` * [WeaviateTranslator](https://api.js.langchain.com/classes/langchain_retrievers_self_query_weaviate.WeaviateTranslator.html) from `langchain/retrievers/self_query/weaviate` * [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents` You can also initialize the retriever with default search parameters that apply in addition to the generated query: const selfQueryRetriever = SelfQueryRetriever.fromLLM({ llm, vectorStore, documentContents, attributeInfo, /** * We need to use a translator that translates the queries into a * filter format that the vector store can understand. LangChain provides one here. */ structuredQueryTranslator: new WeaviateTranslator(), searchParams: { filter: { where: { operator: "Equal", path: ["type"], valueText: "movie", }, }, mergeFiltersOperator: "or", },}); See the [official docs](https://weaviate.io/developers/weaviate/api/graphql/filters) for more on how to construct metadata filters. * * * #### Help us out by providing feedback on this documentation page: [ Previous Vectara Self Query Retriever ](/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/)[ Next Similarity Score Threshold ](/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/string/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/) * [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/) * [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) * [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) * [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/) * [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/) * [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/) * [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/) * [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/) * [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/) * [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/) * [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/) * [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * String output parser On this page String output parser ==================== The `StringOutputParser` takes language model output (either an entire response or as a stream) and converts it into a string. This is useful for standardizing chat model and LLM output. This output parser can act as a transform stream and work with streamed response chunks from a model. Usage[​](#usage "Direct link to Usage") --------------------------------------- tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { StringOutputParser } from "@langchain/core/output_parsers";const parser = new StringOutputParser();const model = new ChatOpenAI({ temperature: 0 });const stream = await model.pipe(parser).stream("Hello there!");for await (const chunk of stream) { console.log(chunk);}/* Hello ! How can I assist you today ?*/ #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers` * * * #### Help us out by providing feedback on this documentation page: [ Previous Output Parser Types ](/v0.1/docs/modules/model_io/output_parsers/types/)[ Next HTTP Response Output Parser ](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) * [Usage](#usage) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/http_response/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/) * [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/) * [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) * [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) * [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/) * [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/) * [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/) * [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/) * [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/) * [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/) * [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/) * [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/) * [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * HTTP Response Output Parser HTTP Response Output Parser =========================== The HTTP Response output parser allows you to stream LLM output properly formatted bytes a web [HTTP response](https://developer.mozilla.org/en-US/docs/Web/API/Response): tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { ChatOpenAI } from "@langchain/openai";import { HttpResponseOutputParser } from "langchain/output_parsers";const handler = async () => { const parser = new HttpResponseOutputParser(); const model = new ChatOpenAI({ temperature: 0 }); const stream = await model.pipe(parser).stream("Hello there!"); const httpResponse = new Response(stream, { headers: { "Content-Type": "text/plain; charset=utf-8", }, }); return httpResponse;};await handler(); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [HttpResponseOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) from `langchain/output_parsers` You can also stream back chunks as an [event stream](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events): import { ChatOpenAI } from "@langchain/openai";import { HttpResponseOutputParser } from "langchain/output_parsers";const handler = async () => { const parser = new HttpResponseOutputParser({ contentType: "text/event-stream", }); const model = new ChatOpenAI({ temperature: 0 }); // Values are stringified to avoid dealing with newlines and should // be parsed with `JSON.parse()` when consuming. const stream = await model.pipe(parser).stream("Hello there!"); const httpResponse = new Response(stream, { headers: { "Content-Type": "text/event-stream", }, }); return httpResponse;};await handler(); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [HttpResponseOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) from `langchain/output_parsers` Or pass a custom output parser to internally parse chunks for e.g. streaming function outputs: import { ChatOpenAI } from "@langchain/openai";import { HttpResponseOutputParser, JsonOutputFunctionsParser,} from "langchain/output_parsers";const handler = async () => { const parser = new HttpResponseOutputParser({ contentType: "text/event-stream", outputParser: new JsonOutputFunctionsParser({ diff: true }), }); const model = new ChatOpenAI({ temperature: 0 }).bind({ functions: [ { name: "get_current_weather", description: "Get the current weather in a given location", parameters: { type: "object", properties: { location: { type: "string", description: "The city and state, e.g. San Francisco, CA", }, unit: { type: "string", enum: ["celsius", "fahrenheit"] }, }, required: ["location"], }, }, ], // You can set the `function_call` arg to force the model to use a function function_call: { name: "get_current_weather", }, }); const stream = await model.pipe(parser).stream("Hello there!"); const httpResponse = new Response(stream, { headers: { "Content-Type": "text/event-stream", }, }); return httpResponse;};await handler(); #### API Reference: * [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai` * [HttpResponseOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) from `langchain/output_parsers` * [JsonOutputFunctionsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputFunctionsParser.html) from `langchain/output_parsers` * * * #### Help us out by providing feedback on this documentation page: [ Previous String output parser ](/v0.1/docs/modules/model_io/output_parsers/types/string/)[ Next JSON Output Functions Parser ](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/csv/
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}()) [Skip to main content](#__docusaurus_skipToContent_fallback) LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/). [ ![🦜️🔗 Langchain](/v0.1/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.1/img/brand/wordmark-dark.png) ](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com) [More](#) * [People](/v0.1/docs/people/) * [Community](/v0.1/docs/community/) * [Tutorials](/v0.1/docs/additional_resources/tutorials/) * [Contributing](/v0.1/docs/contributing/) [v0.1](#) * [v0.2](https://js.langchain.com/v0.2/docs/introduction) * [v0.1](/v0.1/docs/get_started/introduction/) [🦜🔗](#) * [LangSmith](https://smith.langchain.com) * [LangSmith Docs](https://docs.smith.langchain.com) * [LangChain Hub](https://smith.langchain.com/hub) * [LangServe](https://github.com/langchain-ai/langserve) * [Python Docs](https://python.langchain.com/) [Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs) Search * [Get started](/v0.1/docs/get_started/) * [Introduction](/v0.1/docs/get_started/introduction/) * [Installation](/v0.1/docs/get_started/installation/) * [Quickstart](/v0.1/docs/get_started/quickstart/) * [LangChain Expression Language](/v0.1/docs/expression_language/) * [Get started](/v0.1/docs/expression_language/get_started/) * [Why use LCEL?](/v0.1/docs/expression_language/why/) * [Interface](/v0.1/docs/expression_language/interface/) * [Streaming](/v0.1/docs/expression_language/streaming/) * [How to](/v0.1/docs/expression_language/how_to/routing/) * [Cookbook](/v0.1/docs/expression_language/cookbook/) * [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Quickstart](/v0.1/docs/modules/model_io/quick_start/) * [Concepts](/v0.1/docs/modules/model_io/concepts/) * [Prompts](/v0.1/docs/modules/model_io/prompts/) * [LLMs](/v0.1/docs/modules/model_io/llms/) * [Chat Models](/v0.1/docs/modules/model_io/chat/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/) * [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/) * [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/) * [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/) * [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/) * [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/) * [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/) * [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/) * [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/) * [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/) * [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/) * [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/) * [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/) * [Retrieval](/v0.1/docs/modules/data_connection/) * [Chains](/v0.1/docs/modules/chains/) * [Agents](/v0.1/docs/modules/agents/) * [More](/v0.1/docs/modules/memory/) * [Security](/v0.1/docs/security/) * [Guides](/v0.1/docs/guides/) * [Ecosystem](/v0.1/docs/ecosystem/) * [LangGraph](/v0.1/docs/langgraph/) * * * * * [](/v0.1/) * [Modules](/v0.1/docs/modules/) * [Model I/O](/v0.1/docs/modules/model_io/) * [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/) * [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/) * List parser List parser =========== This output parser can be used when you want to return a list of comma-separated items. tip See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages). * npm * Yarn * pnpm npm install @langchain/openai yarn add @langchain/openai pnpm add @langchain/openai import { OpenAI } from "@langchain/openai";import { PromptTemplate } from "@langchain/core/prompts";import { CommaSeparatedListOutputParser } from "@langchain/core/output_parsers";import { RunnableSequence } from "@langchain/core/runnables";export const run = async () => { // With a `CommaSeparatedListOutputParser`, we can parse a comma separated list. const parser = new CommaSeparatedListOutputParser(); const chain = RunnableSequence.from([ PromptTemplate.fromTemplate("List five {subject}.\n{format_instructions}"), new OpenAI({ temperature: 0 }), parser, ]); /* List five ice cream flavors. Your response should be a list of comma separated values, eg: `foo, bar, baz` */ const response = await chain.invoke({ subject: "ice cream flavors", format_instructions: parser.getFormatInstructions(), }); console.log(response); /* [ 'Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream' ] */}; #### API Reference: * [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai` * [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts` * [CommaSeparatedListOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.CommaSeparatedListOutputParser.html) from `@langchain/core/output_parsers` * [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables` * * * #### Help us out by providing feedback on this documentation page: [ Previous Combining output parsers ](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)[ Next Custom list parser ](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/) Community * [Discord](https://discord.gg/cU2adEyC7w) * [Twitter](https://twitter.com/LangChainAI) GitHub * [Python](https://github.com/langchain-ai/langchain) * [JS/TS](https://github.com/langchain-ai/langchainjs) More * [Homepage](https://langchain.com) * [Blog](https://blog.langchain.dev) Copyright © 2024 LangChain, Inc.