It is also possible to use multiple memory classes in the same chain. To combine them, initialize a CombinedMemory instance with the individual memories and pass the combined instance to the chain.
```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  BufferMemory,
  CombinedMemory,
  ConversationSummaryMemory,
} from "langchain/memory";
import { ConversationChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

// buffer memory: keeps the recent conversation verbatim
const bufferMemory = new BufferMemory({
  memoryKey: "chat_history_lines",
  inputKey: "input",
});

// summary memory: maintains a running summary of the conversation
const summaryMemory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  inputKey: "input",
  memoryKey: "conversation_summary",
});

// combine both memories under distinct memory keys
const memory = new CombinedMemory({
  memories: [bufferMemory, summaryMemory],
});

const _DEFAULT_TEMPLATE = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Summary of conversation:
{conversation_summary}
Current conversation:
{chat_history_lines}
Human: {input}
AI:`;

const PROMPT = new PromptTemplate({
  inputVariables: ["input", "conversation_summary", "chat_history_lines"],
  template: _DEFAULT_TEMPLATE,
});

const model = new ChatOpenAI({ temperature: 0.9, verbose: true });
const chain = new ConversationChain({ llm: model, memory, prompt: PROMPT });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
  {
    res1: {
      response: "Hello Jim! It's nice to meet you. How can I assist you today?"
    }
  }
*/

const res2 = await chain.call({ input: "Can you tell me a joke?" });
console.log({ res2 });
/*
  {
    res2: {
      response: 'Why did the scarecrow win an award? Because he was outstanding in his field!'
    }
  }
*/

const res3 = await chain.call({
  input: "What's my name and what joke did you just tell?",
});
console.log({ res3 });
/*
  {
    res3: {
      response: 'Your name is Jim. The joke I just told was about a scarecrow winning an award because he was outstanding in his field.'
    }
  }
*/
```
API Reference:
- ChatOpenAI from langchain/chat_models/openai
- BufferMemory from langchain/memory
- CombinedMemory from langchain/memory
- ConversationSummaryMemory from langchain/memory
- ConversationChain from langchain/chains
- PromptTemplate from langchain/prompts
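The merge step above can be sketched without any API calls. The class and interface below are illustrative stand-ins, not the langchain implementation: each child memory exposes its variables under its own memory key, and the combined memory merges them into one object that the prompt template consumes. This is why `bufferMemory` and `summaryMemory` must use distinct `memoryKey` values.

```typescript
// Minimal sketch (an assumption, not langchain's code) of how CombinedMemory
// merges variables from its child memories into a single object.
interface SimpleMemory {
  memoryKey: string;
  load(): Record<string, string>;
}

class CombinedMemorySketch {
  constructor(private memories: SimpleMemory[]) {}

  load(): Record<string, string> {
    // Merge every child's variables; keys must be unique across children.
    return this.memories.reduce(
      (acc, m) => ({ ...acc, ...m.load() }),
      {} as Record<string, string>
    );
  }
}

const buffer: SimpleMemory = {
  memoryKey: "chat_history_lines",
  load: () => ({ chat_history_lines: "Human: Hi! I'm Jim.\nAI: Hello Jim!" }),
};
const summary: SimpleMemory = {
  memoryKey: "conversation_summary",
  load: () => ({ conversation_summary: "Jim introduces himself." }),
};

const combined = new CombinedMemorySketch([buffer, summary]);
console.log(combined.load());
// Both keys are present, so a template containing {conversation_summary}
// and {chat_history_lines} can be filled from this single object.
```

If two children shared a memory key, the later one would silently overwrite the earlier one in the merge, which is the failure mode the distinct keys avoid.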
Conversation summary memory
Page Title: Conversation summary memory | 🦜️🔗 Langchain
Now let's take a look at a slightly more complex type of memory: ConversationSummaryMemory. This type of memory creates a summary of the conversation over time, which is useful for condensing information from long conversations.
Conversation summary memory summarizes the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.

Let's first explore the basic functionality of this type of memory.

Usage, with an LLM

```typescript
import { OpenAI } from "langchain/llms/openai";
import { ConversationSummaryMemory } from "langchain/memory";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

export const run = async () => {
  const memory = new ConversationSummaryMemory({
    memoryKey: "chat_history",
    llm: new OpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  });

  const model = new OpenAI({ temperature: 0.9 });
  const prompt = PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

  Current conversation:
  {chat_history}
  Human: {input}
  AI:`);
  const chain = new LLMChain({ llm: model, prompt, memory });

  const res1 = await chain.call({ input: "Hi! I'm Jim." });
  console.log({ res1, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res1: {
        text: " Hi Jim, I'm AI! It's nice to meet you. I'm an AI programmed to provide information about the environment around me. Do you have any specific questions about the area that I can answer for you?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI responds, introducing itself as a program designed to provide information about the environment. The AI offers to answer any specific questions Jim may have about the area.'
      }
    }
  */

  const res2 = await chain.call({ input: "What's my name?" });
  console.log({ res2, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res2: { text: ' You told me your name is Jim.' },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI responds, introducing itself as a program designed to provide information about the environment. The AI offers to answer any specific questions Jim may have about the area. Jim asks the AI what his name is, and the AI responds that Jim had previously told it his name.'
      }
    }
  */
};
```

API Reference:
- OpenAI from langchain/llms/openai
- ConversationSummaryMemory from langchain/memory
- LLMChain from langchain/chains
- PromptTemplate from langchain/prompts

Usage, with a Chat Model

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationSummaryMemory } from "langchain/memory";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

export const run = async () => {
  const memory = new ConversationSummaryMemory({
    memoryKey: "chat_history",
    llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  });

  const model = new ChatOpenAI();
  const prompt = PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

  Current conversation:
  {chat_history}
  Human: {input}
  AI:`);
  const chain = new LLMChain({ llm: model, prompt, memory });

  const res1 = await chain.call({ input: "Hi! I'm Jim." });
  console.log({ res1, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res1: {
        text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI greets him and offers assistance.'
      }
    }
  */

  const res2 = await chain.call({ input: "What's my name?" });
  console.log({ res2, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res2: {
        text: "Your name is Jim. It's nice to meet you, Jim. How can I assist you today?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI greets him and offers assistance. The AI addresses Jim by name and asks how it can assist him.'
      }
    }
  */
};
```

API Reference:
- ChatOpenAI from langchain/chat_models/openai
- ConversationSummaryMemory from langchain/memory
- LLMChain from langchain/chains
- PromptTemplate from langchain/prompts
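The update loop underlying both examples can be sketched without an LLM. In the sketch below the summarizer is a trivial stub (an assumption for illustration; ConversationSummaryMemory delegates this step to the LLM via a summarization prompt), but the shape of the loop is the same: after every turn, fold the new exchange into one running summary string, and serve only that string to the prompt.

```typescript
// Conceptual sketch of a summary memory. The summarizer is a stand-in
// for the LLM call that the real class makes.
type Summarizer = (previousSummary: string, newLines: string) => string;

class SummaryMemorySketch {
  private summary = "";
  constructor(
    private summarize: Summarizer,
    public memoryKey = "chat_history"
  ) {}

  // Called after each turn, like saveContext({ input }, { output }).
  saveContext(input: string, output: string): void {
    const newLines = `Human: ${input}\nAI: ${output}`;
    this.summary = this.summarize(this.summary, newLines);
  }

  // Called before each turn to fill the {chat_history} prompt variable.
  loadMemoryVariables(): Record<string, string> {
    return { [this.memoryKey]: this.summary };
  }
}

// Stub "LLM": appends one compressed note per turn.
const stubSummarizer: Summarizer = (prev, lines) =>
  (prev ? prev + " " : "") + `[turn: ${lines.split("\n").length} messages]`;

const memory = new SummaryMemorySketch(stubSummarizer);
memory.saveContext("Hi! I'm Jim.", "Hello Jim!");
memory.saveContext("What's my name?", "Your name is Jim.");
console.log(memory.loadMemoryVariables());
// The prompt always receives a single, bounded summary string instead of
// the full verbatim transcript, which is what keeps token usage flat.
```

Note the trade-off this makes explicit: the summary's length is controlled by the summarizer, not by conversation length, but details the summarizer drops are gone for good.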
Conversation summary buffer memory
Page Title: ConversationSummaryBufferMemory | 🦜️🔗 Langchain
ConversationSummaryBufferMemory combines the ideas behind BufferMemory and ConversationSummaryMemory.
It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. |
4e9727215e95-2823 | Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.Let's first walk through how to use it:import { OpenAI } from "langchain/llms/openai";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationSummaryBufferMemory } from "langchain/memory";import { ConversationChain } from "langchain/chains";import { ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate,} from "langchain/prompts";// summary buffer memoryconst memory = new ConversationSummaryBufferMemory({ llm: new OpenAI({ modelName: "text-davinci-003", temperature: 0 }), maxTokenLimit: 10,});await memory.saveContext({ input: "hi" }, { output: "whats up" });await memory.saveContext({ input: "not much you" }, { output: "not much" });const history = await memory.loadMemoryVariables({});console.log({ history });/* { history: { history: 'System: \n' + 'The human greets the AI, to which the AI responds.\n' + 'Human: not much you\n' |
      + 'AI: not much' }
  }
*/

// We can also get the history as a list of messages
// (this is useful if you are using this with a chat prompt).
const chatPromptMemory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  maxTokenLimit: 10,
  returnMessages: true,
});
await chatPromptMemory.saveContext({ input: "hi" }, { output: "whats up" });
await chatPromptMemory.saveContext(
  { input: "not much you" },
  { output: "not much" }
);

// We can also call the predictNewSummary method directly.
const messages = await chatPromptMemory.chatHistory.getMessages();
const previousSummary = "";
const predictSummary = await chatPromptMemory.predictNewSummary(
  messages,
  previousSummary
);
console.log(JSON.stringify(predictSummary));

// Using in a chain: let's walk through an example, again setting
// verbose to true so we can see the prompt.
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);
const model = new ChatOpenAI({ temperature: 0.9, verbose: true });
const chain = new ConversationChain({
  llm: model,
  memory: chatPromptMemory,
  prompt: chatPrompt,
});

const res1 = await chain.predict({ input: "Hi, what's up?" });
console.log({ res1 });
/*
  { res1: 'Hello! I am an AI language model, always ready to have a conversation. How can I assist you today?' }
*/

const res2 = await chain.predict({
  input: "Just working on writing some documentation!",
});
console.log({ res2 });
/*
  { res2: "That sounds productive! Documentation is an important aspect of many projects. Is there anything specific you need assistance with regarding your documentation? I'm here to help!" }
*/

const res3 = await chain.predict({
  input: "For LangChain! Have you heard of it?",
});
console.log({ res3 });
/*
  { res3: 'Yes, I am familiar with LangChain! It is a blockchain-based language learning platform that aims to connect language learners with native speakers for real-time practice and feedback. It utilizes smart contracts to facilitate secure transactions and incentivize participation. Users can earn tokens by providing language learning services or consuming them for language lessons.' }
*/

const res4 = await chain.predict({
  input: "That's not the right one, although a lot of people confuse it for that!",
});
console.log({ res4 });
/*
  { res4: "I apologize for the confusion! Could you please provide some more information about the LangChain you're referring to? That way, I can better understand and assist you with writing documentation for it." }
*/

API Reference: OpenAI from langchain/llms/openai, ChatOpenAI from langchain/chat_models/openai, ConversationSummaryBufferMemory from langchain/memory, ConversationChain from langchain/chains, ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, and SystemMessagePromptTemplate from langchain/prompts.
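The token-based flushing described above can be sketched without LangChain at all. The sketch below is not LangChain's actual implementation: `estimateTokens` is a hypothetical heuristic (roughly four characters per token), and a real implementation would call an LLM to merge evicted messages into the summary rather than concatenating them.

```typescript
// Sketch of ConversationSummaryBufferMemory's core idea: once the buffer's
// estimated token count exceeds maxTokenLimit, pop the oldest messages and
// fold them into a running summary.

type Message = { role: "human" | "ai"; text: string };

// Crude token estimate (~4 characters per token); hypothetical helper.
const estimateTokens = (msgs: Message[]): number =>
  msgs.reduce((n, m) => n + Math.ceil(m.text.length / 4), 0);

function pruneToTokenLimit(
  buffer: Message[],
  summary: string,
  maxTokenLimit: number
): { buffer: Message[]; summary: string } {
  const kept = [...buffer];
  let newSummary = summary;
  while (kept.length > 0 && estimateTokens(kept) > maxTokenLimit) {
    const oldest = kept.shift()!;
    // A real implementation would ask an LLM to merge this into the summary.
    newSummary += ` ${oldest.role}: ${oldest.text}.`;
  }
  return { buffer: kept, summary: newSummary.trim() };
}
```

With a limit of 5 estimated tokens, the "hi" / "whats up" exchange from the example above gets folded into the summary while the most recent turn stays verbatim in the buffer.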
Vector store-backed memory
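The "top-K most salient docs" lookup above is an embedding similarity search. The following is a toy sketch of that idea only, not LangChain's actual implementation: it ranks hand-made vectors by cosine similarity and keeps the k closest snippets, standing in for what the retriever does with real embeddings.

```javascript
// Toy illustration of top-K retrieval: rank stored snippets by cosine
// similarity to a query vector and keep the k best. Real vector stores
// compute embeddings with a model; the vectors below are made up.
function cosine(a, b) {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVec, docs, k) {
  return [...docs]
    .sort((x, y) => cosine(queryVec, y.vector) - cosine(queryVec, x.vector))
    .slice(0, k)
    .map((d) => d.text);
}

const docs = [
  { text: "My favorite food is pizza", vector: [1, 0, 0] },
  { text: "My favorite sport is soccer", vector: [0, 1, 0] },
  { text: "I don't like the Celtics", vector: [0, 0.9, 0.4] },
];

// A query vector close to the "soccer" snippet retrieves it first.
console.log(topK([0, 1, 0.1], docs, 1));
```

Because only similarity matters, the order in which snippets were saved plays no role here, which is exactly why this memory class doesn't track interaction order.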
Examples
Page Title: Examples: Memory | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-2855 | Paragraphs:
Examples: Memory

📄️ DynamoDB-Backed Chat Memory: For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.
📄️ Firestore Chat Memory: For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Firestore instance.
📄️ Momento-Backed Chat Memory: For distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.
📄️ Motörhead Memory: Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
📄️ PlanetScale Chat Memory: Because PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers, and other serverless environments.
📄️ Redis-Backed Chat Memory: For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.
📄️ Upstash Redis-Backed Chat Memory: Because Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers, and other serverless environments.
📄️ Xata Chat Memory: Xata is a serverless data platform based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database.
📄️ Zep Memory: Zep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, and document Q&A histories, and exposes them via simple, low-latency APIs.

Copyright © 2023 LangChain, Inc.
4e9727215e95-2857 | Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryHow-toIntegrationsDynamoDB-Backed Chat MemoryFirestore Chat MemoryMomento-Backed Chat MemoryMotörhead MemoryPlanetScale Chat MemoryRedis-Backed Chat MemoryUpstash Redis-Backed Chat MemoryXata Chat MemoryZep MemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesMemoryIntegrationsExamples: Memory📄️ DynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.📄️ Firestore Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a firestore.📄️ Momento-Backed Chat MemoryFor distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.📄️ Motörhead MemoryMotörhead is a memory server implemented in Rust. |
4e9727215e95-2858 | It automatically handles incremental summarization in the background and allows for stateless applications.📄️ PlanetScale Chat MemoryBecause PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Redis-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.📄️ Upstash Redis-Backed Chat MemoryBecause Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Xata Chat MemoryXata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a📄️ Zep MemoryZep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, document Q&A histories and exposes them via simple, low-latency APIs.PreviousVector store-backed memoryNextDynamoDB-Backed Chat Memory
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryHow-toIntegrationsDynamoDB-Backed Chat MemoryFirestore Chat MemoryMomento-Backed Chat MemoryMotörhead MemoryPlanetScale Chat MemoryRedis-Backed Chat MemoryUpstash Redis-Backed Chat MemoryXata Chat MemoryZep MemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI reference |
4e9727215e95-2859 | ModulesMemoryIntegrationsExamples: Memory📄️ DynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.📄️ Firestore Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a firestore.📄️ Momento-Backed Chat MemoryFor distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.📄️ Motörhead MemoryMotörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.📄️ PlanetScale Chat MemoryBecause PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Redis-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.📄️ Upstash Redis-Backed Chat MemoryBecause Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Xata Chat MemoryXata is a serverless data platform, based on PostgreSQL.
It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a📄️ Zep MemoryZep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, document Q&A histories and exposes them via simple, low-latency APIs.PreviousVector store-backed memoryNextDynamoDB-Backed Chat Memory |
4e9727215e95-2860 | Examples: Memory📄️ DynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.📄️ Firestore Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a firestore.📄️ Momento-Backed Chat MemoryFor distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.📄️ Motörhead MemoryMotörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.📄️ PlanetScale Chat MemoryBecause PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Redis-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.📄️ Upstash Redis-Backed Chat MemoryBecause Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.📄️ Xata Chat MemoryXata is a serverless data platform, based on PostgreSQL.
It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a📄️ Zep MemoryZep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, document Q&A histories and exposes them via simple, low-latency APIs.
For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance. |
4e9727215e95-2861 | For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a firestore.
For distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
Because PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.
For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Redis instance.
Because Upstash Redis works via a REST API, you can use this with Vercel Edge, Cloudflare Workers and other Serverless environments.
Xata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a
Zep is a memory server that stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, autonomous agent histories, document Q&A histories and exposes them via simple, low-latency APIs.
DynamoDB-Backed Chat Memory
Page Title: DynamoDB-Backed Chat Memory | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-2862 | Paragraphs:
DynamoDB-Backed Chat Memory

For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.

Setup

First, install the AWS DynamoDB client in your project:

npm install @aws-sdk/client-dynamodb
yarn add @aws-sdk/client-dynamodb
pnpm add @aws-sdk/client-dynamodb

Next, sign into your AWS account and create a DynamoDB table. Name the table langchain, and name your partition key id. Make sure your partition key is a string. You can leave the sort key and the other settings alone.

You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.

Usage

import { BufferMemory } from "langchain/memory";
import { DynamoDBChatMessageHistory } from "langchain/stores/message/dynamodb";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new DynamoDBChatMessageHistory({
    tableName: "langchain",
    partitionKey: "id",
    sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation
    config: {
      region: "us-east-2",
      credentials: {
        accessKeyId: "<your AWS access key id>",
        secretAccessKey: "<your AWS secret access key>",
      },
    },
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/

API Reference: BufferMemory from langchain/memory, DynamoDBChatMessageHistory from langchain/stores/message/dynamodb, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains
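The table described in the setup steps can also be created from the command line instead of the console. A sketch using the AWS CLI, assuming the CLI is installed and configured; on-demand billing and the us-east-2 region are choices here, not requirements:

```shell
# Create the "langchain" table with a string partition key named "id",
# matching the table the setup instructions ask you to create in the console.
aws dynamodb create-table \
  --table-name langchain \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-2
```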
4e9727215e95-2864 | Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryHow-toIntegrationsDynamoDB-Backed Chat MemoryFirestore Chat MemoryMomento-Backed Chat MemoryMotörhead MemoryPlanetScale Chat MemoryRedis-Backed Chat MemoryUpstash Redis-Backed Chat MemoryXata Chat MemoryZep MemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesMemoryIntegrationsDynamoDB-Backed Chat MemoryDynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.SetupFirst, install the AWS DynamoDB client in your project:npmYarnpnpmnpm install @aws-sdk/client-dynamodbyarn add @aws-sdk/client-dynamodbpnpm add @aws-sdk/client-dynamodbNext, sign into your AWS account and create a DynamoDB table. Name the table langchain, and name your partition key id. Make sure your partition key is a string. |
4e9727215e95-2865 | You can leave sort key and the other settings alone.You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.Usageimport { BufferMemory } from "langchain/memory";import { DynamoDBChatMessageHistory } from "langchain/stores/message/dynamodb";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new DynamoDBChatMessageHistory({ tableName: "langchain", partitionKey: "id", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation config: { region: "us-east-2", credentials: { accessKeyId: "<your AWS access key id>", secretAccessKey: "<your AWS secret access key>", }, }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
}}*/API Reference:BufferMemory from langchain/memoryDynamoDBChatMessageHistory from langchain/stores/message/dynamodbChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsPreviousExamplesNextFirestore Chat Memory |
4e9727215e95-2866 | ModulesMemoryIntegrationsDynamoDB-Backed Chat MemoryDynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.SetupFirst, install the AWS DynamoDB client in your project:npmYarnpnpmnpm install @aws-sdk/client-dynamodbyarn add @aws-sdk/client-dynamodbpnpm add @aws-sdk/client-dynamodbNext, sign into your AWS account and create a DynamoDB table. Name the table langchain, and name your partition key id. Make sure your partition key is a string. |
4e9727215e95-2867 | You can leave sort key and the other settings alone.You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.Usageimport { BufferMemory } from "langchain/memory";import { DynamoDBChatMessageHistory } from "langchain/stores/message/dynamodb";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new DynamoDBChatMessageHistory({ tableName: "langchain", partitionKey: "id", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation config: { region: "us-east-2", credentials: { accessKeyId: "<your AWS access key id>", secretAccessKey: "<your AWS secret access key>", }, }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim."
}}*/API Reference:BufferMemory from langchain/memoryDynamoDBChatMessageHistory from langchain/stores/message/dynamodbChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsPreviousExamplesNextFirestore Chat Memory |
4e9727215e95-2868 | DynamoDB-Backed Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a DynamoDB instance.SetupFirst, install the AWS DynamoDB client in your project:npmYarnpnpmnpm install @aws-sdk/client-dynamodbyarn add @aws-sdk/client-dynamodbpnpm add @aws-sdk/client-dynamodbNext, sign into your AWS account and create a DynamoDB table. Name the table langchain, and name your partition key id. Make sure your partition key is a string. |
4e9727215e95-2869 | You can leave sort key and the other settings alone.You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.Usageimport { BufferMemory } from "langchain/memory";import { DynamoDBChatMessageHistory } from "langchain/stores/message/dynamodb";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new DynamoDBChatMessageHistory({ tableName: "langchain", partitionKey: "id", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation config: { region: "us-east-2", credentials: { accessKeyId: "<your AWS access key id>", secretAccessKey: "<your AWS secret access key>", }, }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/API Reference:BufferMemory from langchain/memoryDynamoDBChatMessageHistory from langchain/stores/message/dynamodbChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chains
First, install the AWS DynamoDB client in your project: |
4e9727215e95-2870 | First, install the AWS DynamoDB client in your project:
npmYarnpnpmnpm install @aws-sdk/client-dynamodbyarn add @aws-sdk/client-dynamodbpnpm add @aws-sdk/client-dynamodb
npm install @aws-sdk/client-dynamodbyarn add @aws-sdk/client-dynamodbpnpm add @aws-sdk/client-dynamodb
npm install @aws-sdk/client-dynamodb
yarn add @aws-sdk/client-dynamodb
pnpm add @aws-sdk/client-dynamodb
Next, sign into your AWS account and create a DynamoDB table. Name the table langchain, and name your partition key id. Make sure your partition key is a string. You can leave sort key and the other settings alone.
You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables. |
4e9727215e95-2871 | import { BufferMemory } from "langchain/memory";import { DynamoDBChatMessageHistory } from "langchain/stores/message/dynamodb";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new DynamoDBChatMessageHistory({ tableName: "langchain", partitionKey: "id", sessionId: new Date().toISOString(), // Or some other unique identifier for the conversation config: { region: "us-east-2", credentials: { accessKeyId: "<your AWS access key id>", secretAccessKey: "<your AWS secret access key>", }, }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/
API Reference:BufferMemory from langchain/memoryDynamoDBChatMessageHistory from langchain/stores/message/dynamodbChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chains
Firestore Chat Memory
Page Title: Firestore Chat Memory | 🦜️🔗 Langchain
Paragraphs: |
4e9727215e95-2872 | Paragraphs:
Firestore Chat Memory

For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a Firestore instance.

Setup

First, install the Firebase admin package in your project:

npm install firebase-admin
yarn add firebase-admin
pnpm add firebase-admin

Go to Project settings (the Settings icon) in the Firebase console.

In the Your apps card, select the nickname of the app for which you need a config object.

Select Config from the Firebase SDK snippet pane.

Copy the config object snippet, then add it to your FirestoreChatMessageHistory config.

Usage

import { BufferMemory } from "langchain/memory";
import { FirestoreChatMessageHistory } from "langchain/stores/message/firestore";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new FirestoreChatMessageHistory({
    collectionName: "langchain",
    sessionId: "lc-example",
    userId: "a@example.com",
    config: { projectId: "your-project-id" },
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/

API Reference: BufferMemory from langchain/memory, FirestoreChatMessageHistory from langchain/stores/message/firestore, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains

Firestore Rules

If your collection name is "chathistory", you can configure Firestore rules as follows:

match /chathistory/{sessionId} {
  allow read: if request.auth.uid == resource.data.createdBy;
  allow write: if request.auth.uid == request.resource.data.createdBy;
}
match /chathistory/{sessionId}/messages/{messageId} {
  allow read: if request.auth.uid == resource.data.createdBy;
  allow write: if request.auth.uid == request.resource.data.createdBy;
}
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsMemoryHow-toIntegrationsDynamoDB-Backed Chat MemoryFirestore Chat MemoryMomento-Backed Chat MemoryMotörhead MemoryPlanetScale Chat MemoryRedis-Backed Chat MemoryUpstash Redis-Backed Chat MemoryXata Chat MemoryZep MemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesMemoryIntegrationsFirestore Chat MemoryFirestore Chat MemoryFor longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a firestore.SetupFirst, install the Firebase admin package in your project:npmYarnpnpmyarn add firebase-adminyarn add firebase-adminyarn add firebase-adminGo to your the Settings icon Project settings in the Firebase console.
In the Your apps card, select the nickname of the app for which you need a config object.
Select Config from the Firebase SDK snippet pane. |
4e9727215e95-2875 | Select Config from the Firebase SDK snippet pane.
Copy the config object snippet, then add it to your firebase functions FirestoreChatMessageHistory.Usageimport { BufferMemory } from "langchain/memory";import { FirestoreChatMessageHistory } from "langchain/stores/message/firestore";import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationChain } from "langchain/chains";const memory = new BufferMemory({ chatHistory: new FirestoreChatMessageHistory({ collectionName: "langchain", sessionId: "lc-example", userId: "a@example.com", config: { projectId: "your-project-id" }, }),});const model = new ChatOpenAI();const chain = new ConversationChain({ llm: model, memory });const res1 = await chain.call({ input: "Hi! I'm Jim." });console.log({ res1 });/*{ res1: { text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?" }}*/const res2 = await chain.call({ input: "What did I just say my name was?" });console.log({ res2 });/*{ res1: { text: "You said your name was Jim." }}*/API Reference:BufferMemory from langchain/memoryFirestoreChatMessageHistory from langchain/stores/message/firestoreChatOpenAI from langchain/chat_models/openaiConversationChain from langchain/chainsFirestore RulesIf your collection name is "chathistory," you can configure Firestore rules as follows. |
4e9727215e95-2876 | match /chathistory/{sessionId} { allow read: if request.auth.uid == resource.data.createdBy; allow write: if request.auth.uid == request.resource.data.createdBy; } match /chathistory/{sessionId}/messages/{messageId} { allow read: if request.auth.uid == resource.data.createdBy; allow write: if request.auth.uid == request.resource.data.createdBy; }PreviousDynamoDB-Backed Chat MemoryNextMomento-Backed Chat Memory
Firestore Chat Memory

For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for Firestore.

Setup

First, install the Firebase admin package in your project:

npm install firebase-admin
yarn add firebase-admin
pnpm add firebase-admin

Next, retrieve your Firebase config:

Go to Project settings (the Settings icon) in the Firebase console.
In the Your apps card, select the nickname of the app for which you need a config object.
Select Config from the Firebase SDK snippet pane. |
Copy the config object snippet, then pass it to FirestoreChatMessageHistory via its config option.

Usage

import { BufferMemory } from "langchain/memory";
import { FirestoreChatMessageHistory } from "langchain/stores/message/firestore";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const memory = new BufferMemory({
  chatHistory: new FirestoreChatMessageHistory({
    collectionName: "langchain",
    sessionId: "lc-example",
    userId: "a@example.com",
    config: { projectId: "your-project-id" },
  }),
});

const model = new ChatOpenAI();
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/

API Reference: BufferMemory from langchain/memory, FirestoreChatMessageHistory from langchain/stores/message/firestore, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains

Firestore Rules

If your collection name is "chathistory", you can configure Firestore rules as follows.
match /chathistory/{sessionId} {
  allow read: if request.auth.uid == resource.data.createdBy;
  allow write: if request.auth.uid == request.resource.data.createdBy;
}
match /chathistory/{sessionId}/messages/{messageId} {
  allow read: if request.auth.uid == resource.data.createdBy;
  allow write: if request.auth.uid == request.resource.data.createdBy;
}
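FirestoreChatMessageHistory, like the other backends on the following pages, just swaps the storage behind a common chat-history interface: append a message, list the session's messages, clear the session. A minimal in-memory sketch of that shape (illustrative only — these are simplified names, not the actual LangChain classes):

```typescript
// Illustrative sketch of the chat-history shape that backends like
// FirestoreChatMessageHistory implement; names here are simplified.
type StoredMessage = { role: "human" | "ai"; text: string };

class InMemoryChatHistory {
  private messages: StoredMessage[] = [];

  // Record a human turn.
  addUserMessage(text: string): void {
    this.messages.push({ role: "human", text });
  }

  // Record an AI turn.
  addAIMessage(text: string): void {
    this.messages.push({ role: "ai", text });
  }

  // Return a copy of the session's messages in order.
  getMessages(): StoredMessage[] {
    return [...this.messages];
  }

  // Wipe the session.
  clear(): void {
    this.messages = [];
  }
}

const history = new InMemoryChatHistory();
history.addUserMessage("Hi! I'm Jim.");
history.addAIMessage("Hello Jim!");
```

A persistent backend implements the same four operations against its store (Firestore documents here; DynamoDB items, Redis lists, and so on elsewhere), which is why they are interchangeable behind BufferMemory.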
Momento-Backed Chat Memory
For distributed, serverless persistence across chat sessions, you can swap in a Momento-backed chat message history.
Because a Momento cache is instantly available and requires zero infrastructure maintenance, it's a great way to get started with chat history whether building locally or in production.

Setup
You will need to install the Momento Client Library in your project:
npm install @gomomento/sdk
yarn add @gomomento/sdk
pnpm add @gomomento/sdk
You will also need an API key from Momento. You can sign up for a free account here.
Usage

To distinguish one chat history session from another, we need a unique sessionId. You may also provide an optional sessionTtl to make sessions expire after a given number of seconds.
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { BufferMemory } from "langchain/memory";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { MomentoChatMessageHistory } from "langchain/stores/message/momento";

// See https://github.com/momentohq/client-sdk-javascript for connection options
const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_AUTH_TOKEN",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});

// Create a unique session ID
const sessionId = new Date().toISOString();
const cacheName = "langchain";

const memory = new BufferMemory({
  chatHistory: await MomentoChatMessageHistory.fromProps({
    client,
    cacheName,
    sessionId,
    sessionTtl: 300,
  }),
});
console.log(
  `cacheName=${cacheName} and sessionId=${sessionId}. This will be used to store the chat history. You can inspect the values at your Momento console at https://console.gomomento.com.`
);

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/

// See the chat history in Momento
console.log(await memory.chatHistory.getMessages());
API Reference: BufferMemory from langchain/memory, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains, MomentoChatMessageHistory from langchain/stores/message/momento
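The example above derives sessionId from a timestamp, which can collide if two sessions start in the same millisecond. A sketch of a safer alternative using Node's built-in crypto module (illustrative — any string unique per session works as a Momento session key, and the helper name here is our own):

```typescript
import { randomUUID } from "node:crypto";

// A UUID-based session ID stays unique even when many sessions
// start at the same instant.
function newSessionId(prefix = "session"): string {
  return `${prefix}-${randomUUID()}`;
}

const sessionId = newSessionId();
```

This sessionId can be passed to MomentoChatMessageHistory.fromProps exactly as the timestamp-based one is above.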
Motörhead Memory
Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.

Setup
See instructions at Motörhead for running the server locally, or https://getmetal.io to get API keys for the hosted version.
Usage

import { MotorheadMemory } from "langchain/memory";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

// Managed example (visit https://getmetal.io to get your keys)
// const managedMemory = new MotorheadMemory({
//   memoryKey: "chat_history",
//   sessionId: "test",
//   apiKey: "MY_API_KEY",
//   clientId: "MY_CLIENT_ID",
// });

// Self-hosted example
const memory = new MotorheadMemory({
  memoryKey: "chat_history",
  sessionId: "test",
  url: "localhost:8080", // Required for self-hosted
});

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});
const chain = new ConversationChain({ llm: model, memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
{
  res1: {
    text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
  }
}
*/

const res2 = await chain.call({ input: "What did I just say my name was?" });
console.log({ res2 });
/*
{
  res2: {
    text: "You said your name was Jim."
  }
}
*/
API Reference: MotorheadMemory from langchain/memory, ChatOpenAI from langchain/chat_models/openai, ConversationChain from langchain/chains
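The commented-out managed example and the self-hosted one differ only in which connection fields are set: apiKey/clientId for the hosted service, url for your own server. A small sketch of that branching (the field names mirror the MotorheadMemory options above; the helper function itself is hypothetical):

```typescript
// Hypothetical helper that assembles Motörhead connection options.
// Field names mirror the MotorheadMemory example above.
interface MotorheadOptions {
  memoryKey: string;
  sessionId: string;
  url?: string;      // self-hosted server address
  apiKey?: string;   // managed (hosted) credentials
  clientId?: string;
}

function motorheadOptions(
  sessionId: string,
  managed?: { apiKey: string; clientId: string }
): MotorheadOptions {
  const base = { memoryKey: "chat_history", sessionId };
  return managed
    ? { ...base, apiKey: managed.apiKey, clientId: managed.clientId }
    : { ...base, url: "localhost:8080" }; // url is required when self-hosting
}
```

The resulting object can be spread into the MotorheadMemory constructor, keeping the managed/self-hosted decision in one place.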
PlanetScale Chat Memory
Because PlanetScale works via a REST API, you can use this with Vercel Edge, Cloudflare Workers, and other serverless environments.

For longer-term persistence across chat sessions, you can swap out the default in-memory chatHistory that backs chat memory classes like BufferMemory for a PlanetScale database instance.

Setup

You will need to install @planetscale/database in your project:

npm install @planetscale/database
yarn add @planetscale/database
pnpm add @planetscale/database

You will also need a PlanetScale account and a database to connect to. See instructions on the PlanetScale docs on how to create an HTTP client.

Usage

Each chat history session stored in the PlanetScale database must have a unique id.