# PlanetScale Chat Memory

Because PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers, and other serverless environments.

For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a PlanetScale database instance.

## Setup

You will need to install @planetscale/database in your project:

```bash
npm install @planetscale/database
# or: yarn add @planetscale/database
# or: pnpm add @planetscale/database
```

You will also need a PlanetScale account and a database to connect to. See the PlanetScale docs for instructions on creating an HTTP client.

## Usage

Each chat history session stored in a PlanetScale database must have a unique id. The `config` parameter is passed directly into the `new Client()` constructor of @planetscale/database and takes all the same arguments.

```typescript
import { BufferMemory } from "langchain/memory";
import { PlanetScaleChatMessageHistory } from "langchain/stores/message/planetscale";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new PlanetScaleChatMessageHistory({
    tableName: "stored_message",
    sessionId: "lc-example",
    config: {
      url: "ADD_YOURS_HERE", // Override with your own database instance's URL
    },
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- `BufferMemory` from `langchain/memory`
- `PlanetScaleChatMessageHistory` from `langchain/stores/message/planetscale`
- `ChatOpenAI` from `langchain/chat_models/openai`
- `ConversationChain` from `langchain/chains`

## Advanced Usage

You can also directly pass in a previously created @planetscale/database client instance:

```typescript
import { BufferMemory } from "langchain/memory";
import { PlanetScaleChatMessageHistory } from "langchain/stores/message/planetscale";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { Client } from "@planetscale/database";

// Create your own PlanetScale database client
const client = new Client({
  url: "ADD_YOURS_HERE", // Override with your own database instance's URL
});

const memory = new BufferMemory({
  chatHistory: new PlanetScaleChatMessageHistory({
    tableName: "stored_message",
    sessionId: "lc-example",
    client, // You can reuse your existing database client
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- `BufferMemory` from `langchain/memory`
- `PlanetScaleChatMessageHistory` from `langchain/stores/message/planetscale`
- `ChatOpenAI` from `langchain/chat_models/openai`
- `ConversationChain` from `langchain/chains`
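Backends like `PlanetScaleChatMessageHistory` all fill the same `chatHistory` slot, which boils down to a small contract: append messages, read them back for a session, and clear the session. The sketch below is a hypothetical, simplified illustration of that contract; LangChain's actual base class is asynchronous and uses structured message objects rather than the plain strings used here to keep the example self-contained.

```typescript
// Hypothetical, simplified sketch of the chat-history contract that
// classes like PlanetScaleChatMessageHistory fulfil. The real LangChain
// interface is async and uses message objects; plain strings and
// synchronous methods are used here for brevity.
class InMemoryChatHistory {
  private messages: string[] = [];

  addMessage(message: string): void {
    this.messages.push(message);
  }

  getMessages(): string[] {
    // A database-backed implementation would run a SELECT scoped to its
    // sessionId here; the in-memory version just returns a copy.
    return [...this.messages];
  }

  clear(): void {
    this.messages = [];
  }
}

const history = new InMemoryChatHistory();
history.addMessage("human: Hi! I'm Jim.");
history.addMessage("ai: Nice to meet you, Jim.");
console.log(history.getMessages().length); // 2
history.clear();
console.log(history.getMessages().length); // 0
```

A memory class like `BufferMemory` only ever talks to this contract, which is why swapping the in-memory default for a PlanetScale-backed store requires no other changes to the chain.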
# Redis-Backed Chat Memory

For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a Redis instance.

## Setup

You will need to install node-redis in your project:

```bash
npm install redis
# or: yarn add redis
# or: pnpm add redis
```

You will also need a Redis instance to connect to. See the official Redis website for instructions on running the server locally.

## Usage

Each chat history session stored in Redis must have a unique id. You can provide an optional `sessionTTL` to make sessions expire after a given number of seconds. The `config` parameter is passed directly into the `createClient` method of node-redis and takes all the same arguments.

```typescript
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    sessionTTL: 300, // 5 minutes, omit this parameter to make sessions never expire
    url: "redis://localhost:6379", // Default value, override with your own instance's URL
  }),
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- `BufferMemory` from `langchain/memory`
- `RedisChatMessageHistory` from `langchain/stores/message/ioredis`
- `ChatOpenAI` from `langchain/chat_models/openai`
- `ConversationChain` from `langchain/chains`

## Advanced Usage

You can also directly pass in a previously created client instance (an ioredis client, matching the import below):

```typescript
import { Redis } from "ioredis";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const client = new Redis("redis://localhost:6379");

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    client,
  }),
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- `BufferMemory` from `langchain/memory`
- `RedisChatMessageHistory` from `langchain/stores/message/ioredis`
- `ChatOpenAI` from `langchain/chat_models/openai`
- `ConversationChain` from `langchain/chains`

## Redis Sentinel Support

You can enable a Redis Sentinel-backed cache using ioredis. This requires installing ioredis in your project:

```bash
npm install ioredis
# or: yarn add ioredis
# or: pnpm add ioredis
```

```typescript
import { Redis } from "ioredis";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

// Uses ioredis to facilitate Sentinel connections; see their docs for
// details on setting up more complex Sentinels:
// https://github.com/redis/ioredis#sentinel
const client = new Redis({
  sentinels: [
    { host: "localhost", port: 26379 },
    { host: "localhost", port: 26380 },
  ],
  name: "mymaster",
});

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    client,
  }),
});

const model = new ChatOpenAI({ temperature: 0.5 });
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- `BufferMemory` from `langchain/memory`
- `RedisChatMessageHistory` from `langchain/stores/message/ioredis`
- `ChatOpenAI` from `langchain/chat_models/openai`
- `ConversationChain` from `langchain/chains`
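The Redis examples above use `new Date().toISOString()` as the session id. That is readable and unique enough for a demo, but two conversations started within the same millisecond would collide and share history. A minimal sketch of a safer scheme, assuming a Node.js runtime with `crypto.randomUUID` available, combines the readable timestamp with a UUID:

```typescript
import { randomUUID } from "node:crypto";

// Timestamp-only ids are human-readable but can collide when two
// sessions start within the same millisecond.
const timestampId = new Date().toISOString();

// Appending a UUID keeps the readable timestamp prefix while making
// collisions effectively impossible.
const sessionId = `${timestampId}-${randomUUID()}`;

console.log(sessionId);
```

Any string works as a session id, so the same approach applies unchanged to the PlanetScale-backed history; the only requirement stated by the docs is that each session's id be unique.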
4e9727215e95-2919
The config parameter is passed directly into the createClient method of node-redis, and takes all the same arguments.import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation sessionTTL: 300, // 5 minutes, omit this parameter to make sessions never expire url: "redis://localhost:6379", // Default value, override with your own instance's URL }),});const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
4e9727215e95-2920
}}*/API Reference:BufferMemory from langchain/memoryRedisChatMessageHistory from langchain/stores/message/ioredisChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsAdvanced Usage​You can also directly pass in a previously created node-redis client instance:import { Redis } from "ioredis";import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const client = new Redis("redis://localhost:6379");const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, client, }),});const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
4e9727215e95-2921
}}*/API Reference:BufferMemory from langchain/memoryRedisChatMessageHistory from langchain/stores/message/ioredisChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsRedis Sentinel Support​You can enable a Redis Sentinel backed cache using ioredisThis will require the installation of ioredis in your project.npmYarnpnpmnpm install ioredisyarn add ioredispnpm add ioredisimport { Redis } from "ioredis";import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";// Uses ioredis to facilitate Sentinel Connections see their docs for details on setting up more complex Sentinels: https://github.com/redis/ioredis#sentinelconst client = new Redis({ sentinels: [ { host: "localhost", port: 26379 }, { host: "localhost", port: 26380 }, ], name: "mymaster",});const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, client, }),});const model = new ChatOpenAI({ temperature: 0.5 });const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?"
4e9727215e95-2922
});console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/API Reference:BufferMemory from langchain/memoryRedisChatMessageHistory from langchain/stores/message/ioredisChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsPreviousPlanetScale Chat MemoryNextUpstash Redis-Backed Chat Memory ModulesMemoryIntegrationsRedis-Backed Chat MemoryRedis-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.Setup​You will need to install node-redis in your project:npmYarnpnpmnpm install redisyarn add redispnpm add redisYou will also need a Redis instance to connect to. See instructions on the official Redis website for running the server locally.Usage​Each chat history session stored in Redis must have a unique id. You can provide an optional sessionTTL to make sessions expire after a give number of seconds.
4e9727215e95-2923
The config parameter is passed directly into the createClient method of node-redis, and takes all the same arguments.import { BufferMemory } from "langchain/memory";import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation sessionTTL: 300, // 5 minutes, omit this parameter to make sessions never expire url: "redis://localhost:6379", // Default value, override with your own instance's URL }),});const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
4e9727215e95-2924
}}*/

API Reference:
- BufferMemory from langchain/memory
- RedisChatMessageHistory from langchain/stores/message/ioredis
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains

Advanced Usage

You can also directly pass in a previously created ioredis client instance:

```typescript
import { Redis } from "ioredis";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const client = new Redis("redis://localhost:6379");

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    client,
  }),
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- BufferMemory from langchain/memory
- RedisChatMessageHistory from langchain/stores/message/ioredis
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains

Redis Sentinel Support

You can enable Redis Sentinel-backed chat memory using ioredis. This requires installing ioredis in your project:

```bash
npm install ioredis
# or: yarn add ioredis
# or: pnpm add ioredis
```

```typescript
import { Redis } from "ioredis";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/ioredis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

// ioredis handles Sentinel connections; see its docs for details on setting up
// more complex Sentinels: https://github.com/redis/ioredis#sentinel
const client = new Redis({
  sentinels: [
    { host: "localhost", port: 26379 },
    { host: "localhost", port: 26380 },
  ],
  name: "mymaster",
});

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    client,
  }),
});

const model = new ChatOpenAI({ temperature: 0.5 });

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- BufferMemory from langchain/memory
- RedisChatMessageHistory from langchain/stores/message/ioredis
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains
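Note that the examples above use new Date().toISOString() as the sessionId, which starts a brand-new session on every run of the program. For history to actually persist across process restarts, derive a stable key from something like a user id and conversation id instead. A minimal sketch (the "user:...:conv:..." key format is an arbitrary illustration, not a LangChain convention):

```typescript
// Sketch: build a stable session id so the same conversation maps to the
// same Redis-backed history across process restarts. The key format here
// is an arbitrary choice for illustration, not a LangChain requirement.
function makeSessionId(userId: string, conversationId: string): string {
  if (!userId || !conversationId) {
    throw new Error("userId and conversationId are required");
  }
  return `user:${userId}:conv:${conversationId}`;
}

// e.g. pass it in place of new Date().toISOString():
// new RedisChatMessageHistory({ sessionId: makeSessionId("42", "onboarding"), client })
```

Any scheme works as long as the same conversation always yields the same string, since the session id is what keys the stored messages in Redis.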
Upstash Redis-Backed Chat Memory

Because Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers, and other serverless environments. Based on Redis-Backed Chat Memory.

For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for an Upstash Redis instance.

Setup

You will need to install @upstash/redis in your project:

```bash
npm install @upstash/redis
# or: yarn add @upstash/redis
# or: pnpm add @upstash/redis
```

You will also need an Upstash account and a Redis database to connect to. See the instructions in the Upstash docs on how to create an HTTP client.

Usage

Each chat history session stored in Redis must have a unique id. You can provide an optional sessionTTL to make sessions expire after a given number of seconds.
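In practice you will usually load the connection URL and token from environment variables rather than hard-coding them as in the example below. A minimal sketch; the variable names UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN follow Upstash's usual convention, but verify them against your own deployment:

```typescript
// Sketch: read Upstash connection details from the environment instead of
// hard-coding them. The env var names are an assumption based on Upstash's
// common convention; adjust to match your setup.
interface UpstashConfig {
  url: string;
  token: string;
}

function getUpstashConfig(env: Record<string, string | undefined>): UpstashConfig {
  const url = env.UPSTASH_REDIS_REST_URL;
  const token = env.UPSTASH_REDIS_REST_TOKEN;
  if (!url || !token) {
    throw new Error("UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN must be set");
  }
  return { url, token };
}

// Usage: pass the result as the `config` parameter, e.g.
// new UpstashRedisChatMessageHistory({ sessionId, config: getUpstashConfig(process.env) })
```

Failing fast when a variable is missing gives a clearer error than letting the client fail later with an opaque network error.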
The config parameter is passed directly into the new Redis() constructor of @upstash/redis, and takes all the same arguments.

```typescript
import { BufferMemory } from "langchain/memory";
import { UpstashRedisChatMessageHistory } from "langchain/stores/message/upstash_redis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new UpstashRedisChatMessageHistory({
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    sessionTTL: 300, // 5 minutes; omit this parameter to make sessions never expire
    config: {
      url: "https://ADD_YOURS_HERE.upstash.io", // Override with your own instance's URL
      token: "********", // Override with your own instance's token
    },
  }),
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- BufferMemory from langchain/memory
- UpstashRedisChatMessageHistory from langchain/stores/message/upstash_redis
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains

Advanced Usage

You can also directly pass in a previously created @upstash/redis client instance:

```typescript
import { Redis } from "@upstash/redis";
import { BufferMemory } from "langchain/memory";
import { UpstashRedisChatMessageHistory } from "langchain/stores/message/upstash_redis";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

// Create your own Redis client
const client = new Redis({
  url: "https://ADD_YOURS_HERE.upstash.io",
  token: "********",
});

const memory = new BufferMemory({
  chatHistory: new UpstashRedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    client, // You can reuse your existing Redis client
  }),
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- BufferMemory from langchain/memory
- UpstashRedisChatMessageHistory from langchain/stores/message/upstash_redis
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains

Copyright © 2023 LangChain, Inc.
Xata Chat Memory

Xata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.

With the XataChatMessageHistory class, you can use Xata databases for longer-term persistence of chat sessions. Because Xata works via a REST API and has a pure TypeScript SDK, you can use this with Vercel Edge, Cloudflare Workers, and any other serverless environment.

Setup

Install the Xata CLI:

```bash
npm install @xata.io/cli -g
```

Create a database:

In the Xata UI, create a new database. You can name it whatever you want, but for this example we'll use langchain. When executed for the first time, the Xata LangChain integration will create the table used for storing the chat messages. If a table with that name already exists, it will be left untouched.

Initialize the project:

In your project, run:

```bash
xata init
```

and then choose the database you created above. This will also generate a xata.ts or xata.js file that defines the client you can use to interact with the database. See the Xata getting started docs for more details on using the Xata JavaScript/TypeScript SDK.

Usage

Each chat history session stored in a Xata database must have a unique id.

In this example, the getXataClient() function is used to create a new Xata client based on the environment variables.
However, we recommend using the code generated by the xata init command, in which case you only need to import the getXataClient() function from the generated xata.ts file.

```typescript
import { BufferMemory } from "langchain/memory";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { XataChatMessageHistory } from "langchain/stores/message/xata";
import { BaseClient } from "@xata.io/client";

// If you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

const memory = new BufferMemory({
  chatHistory: new XataChatMessageHistory({
    table: "messages",
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    client: getXataClient(),
    apiKey: process.env.XATA_API_KEY, // The API key is needed for creating the table.
  }),
});

const model = new ChatOpenAI();

const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference:
- BufferMemory from langchain/memory
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains
- XataChatMessageHistory from langchain/stores/message/xata

With pre-created table

If you don't want the code to always check if the table exists, you can create the table manually in the Xata UI and pass createTable: false to the constructor.
The table must have the following columns:

- sessionId of type String
- type of type String
- role of type String
- content of type Text
- name of type String
- additionalKwargs of type Text

```typescript
import { BufferMemory } from "langchain/memory";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { XataChatMessageHistory } from "langchain/stores/message/xata";
import { BaseClient } from "@xata.io/client";

// Before running this example, see the docs at
// https://js.langchain.com/docs/modules/memory/integrations/xata

// If you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

const memory = new BufferMemory({
  chatHistory: new XataChatMessageHistory({
    table: "messages",
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    client: getXataClient(),
    createTable: false, // Explicitly set to false if the table is already created
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
```

API Reference: BufferMemory from langchain/memory, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains, XataChatMessageHistory from langchain/stores/message/xata
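To make the pre-created table's shape concrete, here is a hedged sketch of mapping a chat message onto the columns listed above. This is illustrative only — it is not LangChain's actual serialization code for XataChatMessageHistory, which may store messages differently:

```typescript
// Illustrative row shape mirroring the required columns of the
// pre-created "messages" table.
interface StoredMessageRow {
  sessionId: string;
  type: string; // e.g. "human" | "ai" | "system"
  role: string;
  content: string;
  name: string;
  additionalKwargs: string; // extra fields, JSON-serialized into a Text column
}

// Hypothetical mapping function, not part of any API.
function toRow(
  sessionId: string,
  type: string,
  content: string,
  extras: Record<string, unknown> = {}
): StoredMessageRow {
  return {
    sessionId,
    type,
    role: "",
    name: "",
    content,
    additionalKwargs: JSON.stringify(extras),
  };
}

const row = toRow("lc-example", "human", "Hi! I'm Jim.");
console.log(row.type, row.additionalKwargs); // -> human {}
```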
Zep Memory

Zep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, and document Q&A histories, and exposes them via simple, low-latency APIs.

Key features:

- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window.
4e9727215e95-2979
A series of summaries are stored, providing flexibility for future summarization strategies.Vector search over memories, with messages automatically embedded on creation.Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.Python and JavaScript SDKs.Setup​See the instructions from Zep for running the server locally or through an automated hosting provider.Usage​import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";import { ZepMemory } from "langchain/memory/zep";const sessionId = "TestSession"; // This should be unique for each user or each user's session.const zepURL = "http://localhost:8000";const memory = new ZepMemory({ sessionId, baseURL: zepURL, // This is optional. If you've enabled JWT authentication on your Zep server, you can // pass it in here. See https://docs.getzep.com/deployment/auth apiKey: "change_this_key",});const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0,});const chain = new ConversationChain({ llm: model, memory });console.log("Memory Keys:", memory.memoryKeys);const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
}}*/
console.log("Session ID: ", sessionId);
console.log("Memory: ", await memory.loadMemoryVariables({}));

API Reference:
- ChatOpenAI from langchain/chat_models/openai
- ConversationChain from langchain/chains
- ZepMemory from langchain/memory/zep

Copyright © 2023 LangChain, Inc.
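Integrations like the PlanetScale and Zep chat histories above all implement the same underlying contract: append chat messages under a session ID and read them back later. As a rough, self-contained sketch of that idea — the class and method names below are invented for illustration and are not the actual LangChain or Zep interfaces:

```typescript
// Minimal in-memory sketch of a session-keyed chat history. Real backends
// (PlanetScale, Zep, Redis, ...) persist to durable storage instead of a Map,
// but expose the same append/read shape. Illustrative names only.

type StoredMessage = { role: "human" | "ai"; text: string };

class InMemoryChatHistory {
  private sessions = new Map<string, StoredMessage[]>();

  // Append a message under a session, creating the session on first use.
  addMessage(sessionId: string, message: StoredMessage): void {
    const messages = this.sessions.get(sessionId) ?? [];
    messages.push(message);
    this.sessions.set(sessionId, messages);
  }

  // Return the full history for a session (empty if the session is new).
  getMessages(sessionId: string): StoredMessage[] {
    return this.sessions.get(sessionId) ?? [];
  }

  // Drop a session's history entirely.
  clear(sessionId: string): void {
    this.sessions.delete(sessionId);
  }
}

const history = new InMemoryChatHistory();
history.addMessage("lc-example", { role: "human", text: "Hi! I'm Jim." });
history.addMessage("lc-example", { role: "ai", text: "Hello Jim!" });
console.log(history.getMessages("lc-example").length); // → 2
```

A server like Zep layers summarization, embedding, and token counting on top of this session-keyed read/write shape.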
Page Title: Agents | 🦜️🔗 Langchain

Paragraphs:

Agents

Some applications require a flexible chain of calls to LLMs and other tools based on user input. The Agent interface provides the flexibility for such applications. An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next.

There are two main types of agents:

- Action agents: at each timestep, decide on the next action using the outputs of all previous actions.
- Plan-and-execute agents: decide on the full sequence of actions up front, then execute them all without updating the plan.

Action agents are suitable for small tasks, while plan-and-execute agents are better for complex or long-running tasks that require maintaining long-term objectives and focus. Often the best approach is to combine the dynamism of an action agent with the planning abilities of a plan-and-execute agent by letting the plan-and-execute agent use action agents to execute its plans.

For a full list of agent types, see agent types. Additional abstractions involved in agents are:

- Tools: the actions an agent can take. Which tools you give an agent depends heavily on what you want the agent to do.
- Toolkits: wrappers around collections of tools that can be used together for a specific use case. For example, in order for an agent to
interact with a SQL database, it will likely need one tool to execute queries and another to inspect tables.

Action agents

At a high level, an action agent:

1. Receives user input
2. Decides which tool, if any, to use, and the tool input
3. Calls the tool and records the output (also known as an "observation")
4. Decides the next step using the history of tools, tool inputs, and observations
5. Repeats 3-4 until it determines it can respond directly to the user

Action agents are wrapped in agent executors: chains which are responsible for calling the agent, getting back an action and action input, calling the tool that the action references with the generated input, getting the output of the tool, and then passing all that information back into the agent to get the next action it should take.

Although an agent can be constructed in many ways, it typically involves these components:

- Prompt template: Responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- Language model: Takes the prompt with user input and action history and decides what to do next
- Output parser: Takes the output of the language model and parses it into the next action or a final answer

Plan-and-execute agents

At a high level, a plan-and-execute agent:

1. Receives user input
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps

The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more here.

Get started

LangChain offers several types of agents. Here's an example using one powered by OpenAI functions:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const tools = [new Calculator(), new SerpAPI()];
const chat = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });

const executor = await initializeAgentExecutorWithOptions(tools, chat, {
  agentType: "openai-functions",
  verbose: true,
});

const result = await executor.run("What is the weather in New York? ");
console.log(result);
/*
  The current weather in New York is 72°F with a wind speed of 1 mph coming from the SSW.
  The humidity is at 89% and the UV index is 0 out of 11. The cloud cover is 79% and there has been no rain.
*/

API Reference:
- initializeAgentExecutorWithOptions from langchain/agents
- ChatOpenAI from langchain/chat_models/openai
- SerpAPI from langchain/tools
- Calculator from langchain/tools/calculator

And here is the logged verbose output:

[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "What is the weather in New York? ", "chat_history": []}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "SystemMessage" ], "kwargs": { "content": "You are a helpful AI assistant. ", "additional_kwargs": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "HumanMessage" ], "kwargs": { "content": "What is the weather in New York?
", "additional_kwargs": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [1.97s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "search", "arguments": "{\n \"input\": \"current weather in New York\"\n}" } } } } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 18, "promptTokens": 121, "totalTokens": 139 }
}}[agent/action] [1:chain:AgentExecutor] Agent selected action: { "tool": "search", "toolInput": { "input": "current weather in New York" }, "log": ""}[tool/start] [1:chain:AgentExecutor > 3:tool:SerpAPI] Entering Tool run with input: "current weather in New York"[tool/end] [1:chain:AgentExecutor > 3:tool:SerpAPI] [1.90s] Exiting Tool run with output: "1 am · Feels Like72° · WindSSW 1 mph · Humidity89% · UV Index0 of 11 · Cloud Cover79% · Rain Amount0 in ..."[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "SystemMessage" ], "kwargs": { "content": "You are a helpful AI assistant. ", "additional_kwargs": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "HumanMessage" ], "kwargs": { "content": "What is the weather in New York?
", "additional_kwargs": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "search", "arguments": "{\"input\":\"current weather in New York\"}" } } } }, { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "FunctionMessage" ], "kwargs": { "content": "1 am · Feels Like72° · WindSSW 1 mph · Humidity89% · UV Index0 of 11 · Cloud Cover79% · Rain Amount0 in ...", "name": "search", "additional_kwargs": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [3.33s] Exiting LLM run with output: { "generations": [ [ { "text": "The current weather in New York is 72°F with a wind speed of 1 mph coming from the SSW. The humidity is at 89% and the UV index is 0 out of 11. The cloud cover is 79% and there has been no rain. ", "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "AIMessage" ], "kwargs": { "content": "The current weather in New York is 72°F with a wind speed of 1 mph coming from the SSW.
The humidity is at 89% and the UV index is 0 out of 11. The cloud cover is 79% and there has been no rain. ", "additional_kwargs": {} } } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 58, "promptTokens": 180, "totalTokens": 238 } }}
[chain/end] [1:chain:AgentExecutor] [7.73s] Exiting Chain run with output: { "output": "The current weather in New York is 72°F with a wind speed of 1 mph coming from the SSW. The humidity is at 89% and the UV index is 0 out of 11. The cloud cover is 79% and there has been no rain. "}
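The action-agent loop described above (decide on a tool, call it, record the observation, repeat) can be sketched without any LangChain machinery. This is a toy illustration, not LangChain's implementation: the fakeLLM stub stands in for a real language model, and the tool set and all names here are invented:

```typescript
// Toy sketch of the action-agent loop. A real agent replaces fakeLLM with a
// language model call and parses its output into the next step.

type AgentStep =
  | { type: "action"; tool: string; toolInput: string }
  | { type: "finish"; output: string };

type Observation = { tool: string; toolInput: string; output: string };

// Tools: the actions the agent can take.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `72°F and cloudy in ${q}`,
  calculator: (expr) => String(Function(`return (${expr})`)()),
};

// Stub "language model": decides the next step from the input plus the
// history of observations so far.
function fakeLLM(input: string, history: Observation[]): AgentStep {
  if (history.length === 0) {
    return { type: "action", tool: "search", toolInput: "New York" };
  }
  return { type: "finish", output: `It is ${history[0].output}.` };
}

// Agent-executor loop: call the agent, run the chosen tool, record the
// observation, and repeat until the agent answers directly.
function runAgent(input: string, maxSteps = 5): string {
  const history: Observation[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = fakeLLM(input, history);
    if (step.type === "finish") return step.output;
    const output = tools[step.tool](step.toolInput);
    history.push({ tool: step.tool, toolInput: step.toolInput, output });
  }
  return "Agent stopped after reaching the step limit.";
}

console.log(runAgent("What is the weather in New York?"));
// → It is 72°F and cloudy in New York.
```

The maxSteps cap mirrors what agent executors do in practice: without it, a model that never emits a final answer would loop forever.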