## Chains

Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs, either with each other or with other components. LangChain provides the Chain interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:

```typescript
import { CallbackManagerForChainRun } from "langchain/callbacks";
import { BaseChain as _ } from "langchain/chains";
import { BaseMemory } from "langchain/memory";
import { ChainValues } from "langchain/schema";

abstract class BaseChain {
  memory?: BaseMemory;

  /**
   * Run the core logic of this chain and return the output.
   */
  abstract _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues>;

  /**
   * Return the string type key uniquely identifying this class of chain.
   */
  abstract _chainType(): string;

  /**
   * Return the list of input keys this chain expects to receive when called.
   */
  abstract get inputKeys(): string[];

  /**
   * Return the list of output keys this chain will produce when called.
   */
  abstract get outputKeys(): string[];
}
```

API Reference: CallbackManagerForChainRun from langchain/callbacks, BaseChain from langchain/chains, BaseMemory from langchain/memory, ChainValues from langchain/schema.

This idea of composing components together in a chain is simple but powerful. It drastically simplifies the implementation of complex applications and makes it more modular, which in turn makes applications much easier to debug, maintain, and improve.
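The interface above boils down to "a function from named inputs to named outputs, plus composition". As a rough, dependency-free TypeScript sketch of that idea (illustration only, not LangChain's actual classes; `sequence`, `promptChain`, and `fakeLlmChain` are invented names):

```typescript
// Dependency-free sketch of the Chain idea: a chain maps a record of named
// input values to a record of named output values, and chains compose by
// feeding one chain's outputs into the next.
type SimpleChainValues = Record<string, string>;

interface SimpleChain {
  call(values: SimpleChainValues): Promise<SimpleChainValues>;
}

// A "prompt" step: formats its input into a single prompt string.
const promptChain: SimpleChain = {
  async call(values) {
    return {
      prompt: `What is a good name for a company that makes ${values.product}?`,
    };
  },
};

// A stand-in for the LLM step; a real chain would send `values.prompt` to a model.
const fakeLlmChain: SimpleChain = {
  async call(values) {
    return { text: values.prompt ? "Socktastic!" : "" };
  },
};

// Sequential composition: run each chain in order, merging outputs into the
// running set of values so later chains can read earlier chains' outputs.
function sequence(...chains: SimpleChain[]): SimpleChain {
  return {
    async call(values) {
      let current = { ...values };
      for (const chain of chains) {
        current = { ...current, ...(await chain.call(current)) };
      }
      return current;
    },
  };
}

const pipeline = sequence(promptChain, fakeLlmChain);
pipeline
  .call({ product: "colorful socks" })
  .then((result) => console.log(result.text)); // "Socktastic!"
```

Because `sequence` returns another `SimpleChain`, composed pipelines can themselves be composed, which is the "chains can include other chains" property from the definition above.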
Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

The LLMChain is the most basic building-block chain. It takes in a prompt template, formats it with the user input, and returns the response from an LLM.

To use the LLMChain, first create a prompt template:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// We can construct an LLMChain from a PromptTemplate and an LLM.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
```

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:

```typescript
const chain = new LLMChain({ llm: model, prompt });

// Since this LLMChain is a single-input, single-output chain, we can also `run` it.
// This convenience method takes in a string and returns the value
// of the output key field in the chain response. For LLMChains, this defaults to "text".
const res = await chain.run("colorful socks");
console.log({ res });
// { res: "\n\nSocktastic!" }
```

If there are multiple variables, you can input them all at once using a dictionary. This will return the complete chain response.
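Under the hood, formatting a prompt template is essentially placeholder substitution. A minimal sketch of the idea (illustration only; `formatTemplate` is an invented helper, and LangChain's real PromptTemplate does more, such as validating input variables up front):

```typescript
// Sketch of what PromptTemplate.fromTemplate(...).format(...) does conceptually:
// substitute each {variable} placeholder with the matching input value,
// and fail loudly if a required value is missing.
function formatTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, name) => {
    if (!(name in values)) {
      throw new Error(`Missing value for input variable "${name}"`);
    }
    return values[name];
  });
}

const companyTemplate = "What is a good name for a company that makes {product}?";
console.log(formatTemplate(companyTemplate, { product: "colorful socks" }));
// "What is a good name for a company that makes colorful socks?"
```

The LLMChain then simply sends the formatted string to the model and returns the completion under its output key.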
```typescript
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for {company} that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });

const res = await chain.call({
  company: "a startup",
  product: "colorful socks",
});
console.log({ res });
// { res: { text: '\n\nSocktopia Colourful Creations.' } }
```

You can use a chat model in an LLMChain as well:

```typescript
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);

const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
console.log({ resB });
// { resB: { text: "J'adore la programmation." } }
```

API Reference: ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate from langchain/prompts; LLMChain from langchain/chains; ChatOpenAI from langchain/chat_models/openai.
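A chat prompt template differs from a plain one mainly in that it produces a list of role-tagged messages rather than a single string. A rough sketch of that idea with invented names (`formatChatPrompt`, `ChatMessage`), not LangChain's actual implementation:

```typescript
// Sketch of how a chat prompt template expands into role-tagged messages:
// each message template is filled with the input values independently,
// and the result is an ordered list of { role, content } objects.
type ChatRole = "system" | "human";
type ChatMessage = { role: ChatRole; content: string };

function formatChatPrompt(
  templates: { role: ChatRole; template: string }[],
  values: Record<string, string>
): ChatMessage[] {
  const fill = (t: string) =>
    t.replace(/\{(\w+)\}/g, (match, name) => values[name] ?? match);
  return templates.map(({ role, template }) => ({
    role,
    content: fill(template),
  }));
}

const messages = formatChatPrompt(
  [
    {
      role: "system",
      template:
        "You are a helpful assistant that translates {input_language} to {output_language}.",
    },
    { role: "human", template: "{text}" },
  ],
  { input_language: "English", output_language: "French", text: "I love programming." }
);
console.log(messages);
// [
//   { role: "system", content: "You are a helpful assistant that translates English to French." },
//   { role: "human", content: "I love programming." }
// ]
```

A chat model then consumes this message list directly, which is why the same LLMChain class can wrap either kind of model.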
## How to

📄️ Debugging chains: It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.

📄️ Adding memory (state): Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.
## Debugging chains

It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.

Setting verbose to true will print out some internal states of the Chain object while running it:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";

const chat = new ChatOpenAI({});

// This chain automatically initializes and uses a `BufferMemory` instance
// as well as a default prompt.
const chain = new ConversationChain({ llm: chat, verbose: true });
const res = await chain.call({ input: "What is ChatGPT?" });
console.log({ res });

/*
[chain/start] [1:chain:ConversationChain] Entering Chain run with input: {
  "input": "What is ChatGPT?",
  "history": ""
}
[llm/start] [1:chain:ConversationChain > 2:llm:ChatOpenAI] Entering LLM run with input: {
  "messages": [
    [
      {
        "lc": 1,
        "type": "constructor",
        "id": ["langchain", "schema", "HumanMessage"],
        "kwargs": {
          "content": "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n\nHuman: What is ChatGPT?\nAI:",
          "additional_kwargs": {}
        }
      }
    ]
  ]
}
[llm/end] [1:chain:ConversationChain > 2:llm:ChatOpenAI] [3.54s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "ChatGPT is a language model developed by OpenAI. It is designed to generate human-like responses in a conversational manner. It is trained on a large amount of text data from the internet and is capable of understanding and generating text across a wide range of topics. ChatGPT uses deep learning techniques, specifically a method called the transformer architecture, to process and generate high-quality text responses. Its purpose is to assist users in various conversational tasks, provide information, and engage in interactive conversations.",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": ["langchain", "schema", "AIMessage"],
          "kwargs": {
            "content": "ChatGPT is a language model developed by OpenAI. It is designed to generate human-like responses in a conversational manner. It is trained on a large amount of text data from the internet and is capable of understanding and generating text across a wide range of topics. ChatGPT uses deep learning techniques, specifically a method called the transformer architecture, to process and generate high-quality text responses. Its purpose is to assist users in various conversational tasks, provide information, and engage in interactive conversations.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 100,
      "promptTokens": 69,
      "totalTokens": 169
    }
  }
}
[chain/end] [1:chain:ConversationChain] [3.91s] Exiting Chain run with output: {
  "response": "ChatGPT is a language model developed by OpenAI. It is designed to generate human-like responses in a conversational manner. It is trained on a large amount of text data from the internet and is capable of understanding and generating text across a wide range of topics. ChatGPT uses deep learning techniques, specifically a method called the transformer architecture, to process and generate high-quality text responses. Its purpose is to assist users in various conversational tasks, provide information, and engage in interactive conversations."
}
{ res: { response: 'ChatGPT is a language model developed by OpenAI. It is designed to generate human-like responses in a conversational manner. It is trained on a large amount of text data from the internet and is capable of understanding and generating text across a wide range of topics. ChatGPT uses deep learning techniques, specifically a method called the transformer architecture, to process and generate high-quality text responses. Its purpose is to assist users in various conversational tasks, provide information, and engage in interactive conversations.' } }
*/
```

You can also set this globally by setting the LANGCHAIN_VERBOSE environment variable to "true".
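Conceptually, verbose: true wraps each step so that its inputs, outputs, and timing are logged around the call. A dependency-free sketch of that pattern (illustration only; LangChain actually implements this through its callbacks system, and `withVerbose` is an invented helper):

```typescript
// Sketch of the pattern behind verbose: true: log a step's inputs before the
// call and its outputs (plus elapsed time) after, so intermediate state is
// visible without changing the step's behavior.
type StepValues = Record<string, string>;
type Step = (values: StepValues) => Promise<StepValues>;

function withVerbose(name: string, step: Step): Step {
  return async (values) => {
    console.log(`[chain/start] [${name}] input:`, JSON.stringify(values));
    const started = Date.now();
    const output = await step(values);
    const elapsed = Date.now() - started;
    console.log(`[chain/end] [${name}] [${elapsed}ms] output:`, JSON.stringify(output));
    return output;
  };
}

// Stand-in step; a real chain would call the model here.
const step = withVerbose("ConversationChain", async (v) => ({
  response: `You said: ${v.input}`,
}));

step({ input: "What is ChatGPT?" }).then((res) => console.log(res.response));
```

Because the wrapper returns a function with the same signature, it can be applied to any step in a pipeline, which is why nested runs show up with their own `[N:chain:...]` prefixes in the real logs.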
## Adding memory (state)

Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.

### Get started

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({});
const memory = new BufferMemory();

// This particular chain automatically initializes a BufferMemory instance if none
// is provided, but we pass it explicitly here. It also has a default prompt.
const chain = new ConversationChain({ llm: chat, memory });

const res1 = await chain.run("Answer briefly. What are the first 3 colors of a rainbow?");
console.log(res1);
// The first three colors of a rainbow are red, orange, and yellow.

const res2 = await chain.run("And the next 4?");
console.log(res2);
// The next four colors of a rainbow are green, blue, indigo, and violet.
```

Essentially, BaseMemory defines an interface for how LangChain stores memory. It allows reading stored data through the loadMemoryVariables method and storing new data through the saveContext method. You can learn more about it in the Memory section.
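To make the loadMemoryVariables / saveContext contract concrete, here is a minimal in-memory sketch (illustration only; `SimpleBufferMemory` is an invented class, not LangChain's BufferMemory):

```typescript
// Minimal in-memory sketch of the memory contract described above:
// loadMemoryVariables exposes stored state before a call, and saveContext
// records the latest input/output pair after it, making the chain stateful.
class SimpleBufferMemory {
  private turns: string[] = [];

  async loadMemoryVariables(): Promise<{ history: string }> {
    return { history: this.turns.join("\n") };
  }

  async saveContext(
    input: { input: string },
    output: { response: string }
  ): Promise<void> {
    this.turns.push(`Human: ${input.input}`, `AI: ${output.response}`);
  }
}

async function demo() {
  const memory = new SimpleBufferMemory();
  await memory.saveContext(
    { input: "Answer briefly. What are the first 3 colors of a rainbow?" },
    { response: "Red, orange, and yellow." }
  );
  // Before the next call, the chain loads this history and prepends it to the
  // prompt, which is how "And the next 4?" can be answered in context.
  console.log((await memory.loadMemoryVariables()).history);
}
demo();
```

This is why the second `chain.run` call above can resolve "the next 4" without re-stating the question: the chain loads the accumulated history on every call.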
Page Title: Foundational | 🦜️🔗 Langchain

Foundational

📄️ LLM: An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.

📄️ Sequential: The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
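The idea behind sequential chains, taking the output from one call and feeding it to the next, can be sketched with plain functions. The step functions here are hypothetical stand-ins for model calls, not LangChain API:

```typescript
// A minimal sketch of sequential chaining: each step maps an input string
// to an output string, and steps run in order, feeding forward.
type Step = (input: string) => string;

// Run the steps left to right, passing each output as the next input.
function runSequential(steps: Step[], input: string): string {
  return steps.reduce((value, step) => step(value), input);
}

// Two hypothetical "model calls": one names a company, one writes a slogan.
const nameCompany: Step = (product) => `${product} Co.`;
const writeSlogan: Step = (company) => `${company}: quality you can trust`;

console.log(runSequential([nameCompany, writeSlogan], "Colorful Socks"));
// Colorful Socks Co.: quality you can trust
```

LangChain's SequentialChain generalizes this pattern to chains with named input and output keys rather than bare strings.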
Page Title: LLM | 🦜️🔗 Langchain

LLM

An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.

An LLMChain consists of a PromptTemplate and a language model (either an LLM or a chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.

Get started

We can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// We can construct an LLMChain from a PromptTemplate and an LLM.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chainA = new LLMChain({ llm: model, prompt });

// The result is an object with a `text` property.
const resA = await chainA.call({ product: "colorful socks" });
console.log({ resA });
// { resA: { text: '\n\nSocktastic!' } }

// Since this LLMChain is a single-input, single-output chain, we can also `run` it.
// This convenience method takes in a string and returns the value
// of the output key field in the chain response. For LLMChains, this defaults to "text".
const resA2 = await chainA.run("colorful socks");
console.log({ resA2 });
// { resA2: '\n\nSocktastic!' }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, LLMChain from langchain/chains

Usage with Chat Models

We can also construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to a ChatModel:

import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});

const resB = await chainB.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
console.log({ resB });
// { resB: { text: "J'adore la programmation." } }

API Reference: ChatPromptTemplate from langchain/prompts, HumanMessagePromptTemplate from langchain/prompts, SystemMessagePromptTemplate from langchain/prompts, LLMChain from langchain/chains, ChatOpenAI from langchain/chat_models/openai

Usage in Streaming Mode

We can also construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM in streaming mode, which will stream back tokens as they are generated:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });

// Call the chain with the inputs and a callback for the streamed tokens
const res = await chain.call({ product: "colorful socks" }, [
  {
    handleLLMNewToken(token: string) {
      process.stdout.write(token);
    },
  },
]);
console.log({ res });
// { res: { text: '\n\nKaleidoscope Socks' } }

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, LLMChain from langchain/chains

Cancelling a running LLMChain

We can also cancel a running LLMChain by passing an AbortSignal to the call method:

import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "Give me a long paragraph about {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const controller = new AbortController();

// Call `controller.abort()` somewhere to cancel the request.
setTimeout(() => {
  controller.abort();
}, 3000);

try {
  // Call the chain with the inputs and a callback for the streamed tokens
  const res = await chain.call(
    { product: "colorful socks", signal: controller.signal },
    [
      {
        handleLLMNewToken(token: string) {
          process.stdout.write(token);
        },
      },
    ]
  );
} catch (e) {
  console.log(e);
  // Error: Cancel: canceled
}

API Reference: OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts, LLMChain from langchain/chains

In this example we show cancellation in streaming mode, but it works the same way in non-streaming mode.

Copyright © 2023 LangChain, Inc.
4e9727215e95-2354
import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { LLMChain } from "langchain/chains";// We can construct an LLMChain from a PromptTemplate and an LLM.const model = new OpenAI({ temperature: 0 });const prompt = PromptTemplate.fromTemplate( "What is a good name for a company that makes {product}? ");const chainA = new LLMChain({ llm: model, prompt });// The result is an object with a `text` property.const resA = await chainA.call({ product: "colorful socks" });console.log({ resA });// { resA: { text: '\n\nSocktastic!' } }// Since this LLMChain is a single-input, single-output chain, we can also `run` it.// This convenience method takes in a string and returns the value// of the output key field in the chain response. For LLMChains, this defaults to "text".const resA2 = await chainA.run("colorful socks");console.log({ resA2 });// { resA2: '\n\nSocktastic!' } API Reference:OpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsLLMChain from langchain/chains We can also construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to a ChatModel: We can also construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM in streaming mode, which will stream back tokens as they are generated:
4e9727215e95-2355
import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { LLMChain } from "langchain/chains";// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.const model = new OpenAI({ temperature: 0.9, streaming: true });const prompt = PromptTemplate.fromTemplate( "What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm: model, prompt });// Call the chain with the inputs and a callback for the streamed tokensconst res = await chain.call({ product: "colorful socks" }, [ { handleLLMNewToken(token: string) { process.stdout.write(token); }, },]);console.log({ res });// { res: { text: '\n\nKaleidoscope Socks' } } We can also cancel a running LLMChain by passing an AbortSignal to the call method:
4e9727215e95-2356
We can also cancel a running LLMChain by passing an AbortSignal to the call method: import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";import { LLMChain } from "langchain/chains";// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.const model = new OpenAI({ temperature: 0.9, streaming: true });const prompt = PromptTemplate.fromTemplate( "Give me a long paragraph about {product}? ");const chain = new LLMChain({ llm: model, prompt });const controller = new AbortController();// Call `controller.abort()` somewhere to cancel the request.setTimeout(() => { controller.abort();}, 3000);try { // Call the chain with the inputs and a callback for the streamed tokens const res = await chain.call( { product: "colorful socks", signal: controller.signal }, [ { handleLLMNewToken(token: string) { process.stdout.write(token); }, }, ] );} catch (e) { console.log(e); // Error: Cancel: canceled} In this example we show cancellation in streaming mode, but it works the same way in non-streaming mode. Sequential Page Title: Sequential | 🦜️🔗 Langchain Paragraphs:
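Conceptually, every LLMChain call above reduces to the same two steps: fill a prompt template with the input values, then hand the resulting string to a model and wrap the completion under the chain's output key. The following is a minimal, dependency-free sketch of that idea; the `formatTemplate` and `runChain` names and the stand-in model are illustrative only, not LangChain APIs, and the real chains are asynchronous while this sketch is synchronous for clarity.

```typescript
// Conceptual sketch only — not the real LangChain implementation.
type Model = (prompt: string) => string;

// Substitute {variable} placeholders, similar to what a prompt template does.
function formatTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key] ?? `{${key}}`);
}

// A minimal single-prompt chain: inputs -> formatted prompt -> model -> { text }.
function runChain(template: string, model: Model, values: Record<string, string>) {
  const prompt = formatTemplate(template, values);
  return { text: model(prompt) };
}

// A stand-in "model" so the sketch runs without an API key.
const fakeModel: Model = (prompt) => `[completion for: ${prompt}]`;

const res = runChain(
  "What is a good name for a company that makes {product}?",
  fakeModel,
  { product: "colorful socks" }
);
console.log(res.text);
// [completion for: What is a good name for a company that makes colorful socks?]
```

This is why a chain's `inputKeys` are exactly the template's variables: the formatting step consumes them, and everything after that sees only a single string.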
Sequential

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another. In this notebook we will walk through some examples of how to do this using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:

SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.

SimpleSequentialChain

Let's start with the simplest possible case, which is SimpleSequentialChain. A SimpleSequentialChain is a chain that allows you to join multiple single-input/single-output chains into one chain. The example below shows a sample use case: in the first step, given a title, a synopsis of a play is generated; in the second step, based on the generated synopsis, a review of the play is generated.

import { SimpleSequentialChain, LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

// This is an LLMChain to write a synopsis given a title of a play.
const llm = new OpenAI({ temperature: 0 });
const template = `You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

 Title: {title}
 Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({
  template,
  inputVariables: ["title"],
});
const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });

// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

 Play Synopsis:
 {synopsis}
 Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({
  template: reviewTemplate,
  inputVariables: ["synopsis"],
});
const reviewChain = new LLMChain({
  llm: reviewLLM,
  prompt: reviewPromptTemplate,
});

const overallChain = new SimpleSequentialChain({
  chains: [synopsisChain, reviewChain],
  verbose: true,
});
const review = await overallChain.run("Tragedy at sunset on the beach");
console.log(review);
/*
  The variable `review` contains the generated play review, based on the input
  title and the synopsis generated in the first step:

  "Tragedy at Sunset on the Beach is a powerful and moving story of love, loss,
  and redemption. The play follows the story of two young lovers, Jack and Jill,
  whose plans for a future together are tragically cut short when Jack is killed
  in a car accident. The play follows Jill as she struggles to cope with her
  grief and eventually finds solace in the arms of another man. The play is
  beautifully written and the performances are outstanding. The actors bring the
  characters to life with their heartfelt performances, and the audience is
  taken on an emotional journey as Jill is forced to confront her grief and make
  a difficult decision between her past and her future. The play culminates in a
  powerful climax that will leave the audience in tears. Overall, Tragedy at
  Sunset on the Beach is a powerful and moving story that will stay with you
  long after the curtain falls. It is a must-see for anyone looking for an
  emotionally charged and thought-provoking experience."
*/

API Reference: SimpleSequentialChain, LLMChain from langchain/chains, OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts

SequentialChain

A more advanced scenario, useful when you have multiple chains with more than one input or output key. Unlike SimpleSequentialChain, outputs from all previous chains are available to the next chain.

import { SequentialChain, LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

// This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
const llm = new OpenAI({ temperature: 0 });
const template = `You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.
Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({
  template,
  inputVariables: ["title", "era"],
});
const synopsisChain = new LLMChain({
  llm,
  prompt: promptTemplate,
  outputKey: "synopsis",
});

// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

 Play Synopsis:
 {synopsis}
 Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({
  template: reviewTemplate,
  inputVariables: ["synopsis"],
});
const reviewChain = new LLMChain({
  llm: reviewLLM,
  prompt: reviewPromptTemplate,
  outputKey: "review",
});

const overallChain = new SequentialChain({
  chains: [synopsisChain, reviewChain],
  inputVariables: ["era", "title"],
  // Here we return multiple variables
  outputVariables: ["synopsis", "review"],
  verbose: true,
});
const chainExecutionResult = await overallChain.call({
  title: "Tragedy at sunset on the beach",
  era: "Victorian England",
});
console.log(chainExecutionResult);
/*
  The variable `chainExecutionResult` contains the final review and the
  intermediate synopsis (as specified by outputVariables), generated from the
  input title and era:

  "{
    "review": " Tragedy at Sunset on the Beach is a captivating and heartbreaking story of love and loss. Set in Victorian England, the play follows Emily, a young woman struggling to make ends meet in a small coastal town. Emily's dreams of a better life are dashed when she discovers her employer's scandalous affair, and her plans are further thwarted when she meets a handsome stranger on the beach. The play is a powerful exploration of the human condition, as Emily must grapple with the truth and make a difficult decision that will change her life forever. The performances are outstanding, with the actors bringing a depth of emotion to their characters that is both heartbreaking and inspiring. Overall, Tragedy at Sunset on the Beach is a beautiful and moving play that will leave audiences in tears. It is a must-see for anyone looking for a powerful and thought-provoking story.",
    "synopsis": " Tragedy at Sunset on the Beach is a play set in Victorian England. It tells the story of a young woman, Emily, who is struggling to make ends meet in a small coastal town. She works as a maid for a wealthy family, but her dreams of a better life are dashed when she discovers that her employer is involved in a scandalous affair. Emily is determined to make a better life for herself, but her plans are thwarted when she meets a handsome stranger on the beach one evening. The two quickly fall in love, but their happiness is short-lived when Emily discovers that the stranger is actually a member of the wealthy family she works for. The play follows Emily as she struggles to come to terms with the truth and make sense of her life. As the sun sets on the beach, Emily must decide whether to stay with the man she loves or to leave him and pursue her dreams. In the end, Emily must make a heartbreaking decision that will change her life forever.",
  }"
*/

API Reference: SequentialChain, LLMChain from langchain/chains, OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts

Copyright © 2023 LangChain, Inc.
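Stripped of prompts and models, a SimpleSequentialChain is left-to-right function composition over single-input/single-output steps: each step's output string becomes the next step's input. The following dependency-free sketch illustrates just that piping; the `simpleSequential` helper and the stand-in steps are illustrative, not the real LangChain classes.

```typescript
// Conceptual sketch only — not the real LangChain classes.
type Step = (input: string) => string;

// Compose single-input/single-output steps, threading each output
// into the next step's input, exactly as SimpleSequentialChain does.
function simpleSequential(steps: Step[]): Step {
  return (input) => steps.reduce((value, step) => step(value), input);
}

// Two stand-in steps playing the roles of the synopsis and review chains.
const synopsisStep: Step = (title) => `Synopsis of "${title}"`;
const reviewStep: Step = (synopsis) => `Review based on: ${synopsis}`;

const overall = simpleSequential([synopsisStep, reviewStep]);
console.log(overall("Tragedy at sunset on the beach"));
// -> Review based on: Synopsis of "Tragedy at sunset on the beach"
```

SequentialChain generalizes this by threading a growing record of named keys instead of a single string, which is why it needs explicit inputVariables and outputVariables.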
4e9727215e95-2363
Get startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsHow toFoundationalLLMSequentialDocumentsPopularAdditionalMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesChainsFoundationalSequentialSequentialThe next step after calling a language model is make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.SimpleSequentialChain​Let's start with the simplest possible case which is SimpleSequentialChain.An SimpleSequentialChain is a chain that allows you to join multiple single-input/single-output chains into one chain.The example below shows a sample usecase. In the first step, given a title, a synopsis of a play is generated.
4e9727215e95-2364
In the second step, based on the generated synopsis, a review of the play is generated.import { SimpleSequentialChain, LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";// This is an LLMChain to write a synopsis given a title of a play.const llm = new OpenAI({ temperature: 0 });const template = `You are a playwright. Given the title of play, it is your job to write a synopsis for that title. Title: {title} Playwright: This is a synopsis for the above play:`;const promptTemplate = new PromptTemplate({ template, inputVariables: ["title"],});const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });// This is an LLMChain to write a review of a play given a synopsis.const reviewLLM = new OpenAI({ temperature: 0 });const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
4e9727215e95-2365
Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:`;const reviewPromptTemplate = new PromptTemplate({ template: reviewTemplate, inputVariables: ["synopsis"],});const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate,});const overallChain = new SimpleSequentialChain({ chains: [synopsisChain, reviewChain], verbose: true,});const review = await overallChain.run("Tragedy at sunset on the beach");console.log(review);/* variable review contains the generated play review based on the input title and synopsis generated in the first step: "Tragedy at Sunset on the Beach is a powerful and moving story of love, loss, and redemption. The play follows the story of two young lovers, Jack and Jill, whose plans for a future together are tragically cut short when Jack is killed in a car accident. The play follows Jill as she struggles to cope with her grief and eventually finds solace in the arms of another man. The play is beautifully written and the performances are outstanding. The actors bring the characters to life with their heartfelt performances, and the audience is taken on an emotional journey as Jill is forced to confront her grief and make a difficult decision between her past and her future. The play culminates in a powerful climax that will leave the audience in tears. Overall, Tragedy at Sunset on the Beach is a powerful and moving story that will stay with you long after the curtain falls.
4e9727215e95-2366
It is a must-see for anyone looking for an emotionally charged and thought-provoking experience. "*/API Reference:SimpleSequentialChain from langchain/chainsLLMChain from langchain/chainsOpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsSequentialChain​More advanced scenario useful when you have multiple chains that have more than one input or ouput keys.Unlike for SimpleSequentialChain, outputs from all previous chains will be available to the next chain.import { SequentialChain, LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";// This is an LLMChain to write a synopsis given a title of a play and the era it is set in.const llm = new OpenAI({ temperature: 0 });const template = `You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.Title: {title}Era: {era}Playwright: This is a synopsis for the above play:`;const promptTemplate = new PromptTemplate({ template, inputVariables: ["title", "era"],});const synopsisChain = new LLMChain({ llm, prompt: promptTemplate, outputKey: "synopsis",});// This is an LLMChain to write a review of a play given a synopsis.const reviewLLM = new OpenAI({ temperature: 0 });const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
4e9727215e95-2367
Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:`;const reviewPromptTemplate = new PromptTemplate({ template: reviewTemplate, inputVariables: ["synopsis"],});const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate, outputKey: "review",});const overallChain = new SequentialChain({ chains: [synopsisChain, reviewChain], inputVariables: ["era", "title"], // Here we return multiple variables outputVariables: ["synopsis", "review"], verbose: true,});const chainExecutionResult = await overallChain.call({ title: "Tragedy at sunset on the beach", era: "Victorian England",});console.log(chainExecutionResult);/* variable chainExecutionResult contains final review and intermediate synopsis (as specified by outputVariables). The data is generated based on the input title and era: "{ "review": " Tragedy at Sunset on the Beach is a captivating and heartbreaking story of love and loss. Set in Victorian England, the play follows Emily, a young woman struggling to make ends meet in a small coastal town. Emily's dreams of a better life are dashed when she discovers her employer's scandalous affair, and her plans are further thwarted when she meets a handsome stranger on the beach. The play is a powerful exploration of the human condition, as Emily must grapple with the truth and make a difficult decision that will change her life forever.
4e9727215e95-2368
The performances are outstanding, with the actors bringing a depth of emotion to their characters that is both heartbreaking and inspiring. Overall, Tragedy at Sunset on the Beach is a beautiful and moving play that will leave audiences in tears. It is a must-see for anyone looking for a powerful and thought-provoking story. ", "synopsis": " Tragedy at Sunset on the Beach is a play set in Victorian England. It tells the story of a young woman, Emily, who is struggling to make ends meet in a small coastal town. She works as a maid for a wealthy family, but her dreams of a better life are dashed when she discovers that her employer is involved in a scandalous affair. Emily is determined to make a better life for herself, but her plans are thwarted when she meets a handsome stranger on the beach one evening. The two quickly fall in love, but their happiness is short-lived when Emily discovers that the stranger is actually a member of the wealthy family she works for. The play follows Emily as she struggles to come to terms with the truth and make sense of her life. As the sun sets on the beach, Emily must decide whether to stay with the man she loves or to leave him and pursue her dreams. In the end, Emily must make a heartbreaking decision that will change her life forever. ", }"*/API Reference:SequentialChain from langchain/chainsLLMChain from langchain/chainsOpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsPreviousLLMNextDocuments
4e9727215e95-2369
ModulesChainsFoundationalSequentialSequentialThe next step after calling a language model is make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.SimpleSequentialChain​Let's start with the simplest possible case which is SimpleSequentialChain.An SimpleSequentialChain is a chain that allows you to join multiple single-input/single-output chains into one chain.The example below shows a sample usecase. In the first step, given a title, a synopsis of a play is generated. In the second step, based on the generated synopsis, a review of the play is generated.import { SimpleSequentialChain, LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";// This is an LLMChain to write a synopsis given a title of a play.const llm = new OpenAI({ temperature: 0 });const template = `You are a playwright.
4e9727215e95-2370
Given the title of play, it is your job to write a synopsis for that title. Title: {title} Playwright: This is a synopsis for the above play:`;const promptTemplate = new PromptTemplate({ template, inputVariables: ["title"],});const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });// This is an LLMChain to write a review of a play given a synopsis.const reviewLLM = new OpenAI({ temperature: 0 });const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:`;const reviewPromptTemplate = new PromptTemplate({ template: reviewTemplate, inputVariables: ["synopsis"],});const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate,});const overallChain = new SimpleSequentialChain({ chains: [synopsisChain, reviewChain], verbose: true,});const review = await overallChain.run("Tragedy at sunset on the beach");console.log(review);/* variable review contains the generated play review based on the input title and synopsis generated in the first step: "Tragedy at Sunset on the Beach is a powerful and moving story of love, loss, and redemption. The play follows the story of two young lovers, Jack and Jill, whose plans for a future together are tragically cut short when Jack is killed in a car accident.
4e9727215e95-2371
The play follows Jill as she struggles to cope with her grief and eventually finds solace in the arms of another man. The play is beautifully written and the performances are outstanding. The actors bring the characters to life with their heartfelt performances, and the audience is taken on an emotional journey as Jill is forced to confront her grief and make a difficult decision between her past and her future. The play culminates in a powerful climax that will leave the audience in tears. Overall, Tragedy at Sunset on the Beach is a powerful and moving story that will stay with you long after the curtain falls. It is a must-see for anyone looking for an emotionally charged and thought-provoking experience. "*/API Reference:SimpleSequentialChain from langchain/chainsLLMChain from langchain/chainsOpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsSequentialChain​More advanced scenario useful when you have multiple chains that have more than one input or ouput keys.Unlike for SimpleSequentialChain, outputs from all previous chains will be available to the next chain.import { SequentialChain, LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";// This is an LLMChain to write a synopsis given a title of a play and the era it is set in.const llm = new OpenAI({ temperature: 0 });const template = `You are a playwright.
4e9727215e95-2372
Given the title of play and the era it is set in, it is your job to write a synopsis for that title.Title: {title}Era: {era}Playwright: This is a synopsis for the above play:`;const promptTemplate = new PromptTemplate({ template, inputVariables: ["title", "era"],});const synopsisChain = new LLMChain({ llm, prompt: promptTemplate, outputKey: "synopsis",});// This is an LLMChain to write a review of a play given a synopsis.const reviewLLM = new OpenAI({ temperature: 0 });const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:`;const reviewPromptTemplate = new PromptTemplate({ template: reviewTemplate, inputVariables: ["synopsis"],});const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate, outputKey: "review",});const overallChain = new SequentialChain({ chains: [synopsisChain, reviewChain], inputVariables: ["era", "title"], // Here we return multiple variables outputVariables: ["synopsis", "review"], verbose: true,});const chainExecutionResult = await overallChain.call({ title: "Tragedy at sunset on the beach", era: "Victorian England",});console.log(chainExecutionResult);/* variable chainExecutionResult contains final review and intermediate synopsis (as specified by outputVariables).
4e9727215e95-2373
The data is generated based on the input title and era: "{ "review": " Tragedy at Sunset on the Beach is a captivating and heartbreaking story of love and loss. Set in Victorian England, the play follows Emily, a young woman struggling to make ends meet in a small coastal town. Emily's dreams of a better life are dashed when she discovers her employer's scandalous affair, and her plans are further thwarted when she meets a handsome stranger on the beach. The play is a powerful exploration of the human condition, as Emily must grapple with the truth and make a difficult decision that will change her life forever. The performances are outstanding, with the actors bringing a depth of emotion to their characters that is both heartbreaking and inspiring. Overall, Tragedy at Sunset on the Beach is a beautiful and moving play that will leave audiences in tears. It is a must-see for anyone looking for a powerful and thought-provoking story. ", "synopsis": " Tragedy at Sunset on the Beach is a play set in Victorian England. It tells the story of a young woman, Emily, who is struggling to make ends meet in a small coastal town. She works as a maid for a wealthy family, but her dreams of a better life are dashed when she discovers that her employer is involved in a scandalous affair. Emily is determined to make a better life for herself, but her plans are thwarted when she meets a handsome stranger on the beach one evening.
4e9727215e95-2374
The two quickly fall in love, but their happiness is short-lived when Emily discovers that the stranger is actually a member of the wealthy family she works for. The play follows Emily as she struggles to come to terms with the truth and make sense of her life. As the sun sets on the beach, Emily must decide whether to stay with the man she loves or to leave him and pursue her dreams. In the end, Emily must make a heartbreaking decision that will change her life forever. ", }"*/API Reference:SequentialChain from langchain/chainsLLMChain from langchain/chainsOpenAI from langchain/llms/openaiPromptTemplate from langchain/promptsPreviousLLMNextDocuments
4e9727215e95-2375
SequentialThe next step after calling a language model is make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.SimpleSequentialChain​Let's start with the simplest possible case which is SimpleSequentialChain.An SimpleSequentialChain is a chain that allows you to join multiple single-input/single-output chains into one chain.The example below shows a sample usecase. In the first step, given a title, a synopsis of a play is generated. In the second step, based on the generated synopsis, a review of the play is generated.import { SimpleSequentialChain, LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { PromptTemplate } from "langchain/prompts";// This is an LLMChain to write a synopsis given a title of a play.const llm = new OpenAI({ temperature: 0 });const template = `You are a playwright.
Given the title of play, it is your job to write a synopsis for that title.

 Title: {title}
 Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({
  template,
  inputVariables: ["title"],
});
const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });

// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

 Play Synopsis:
 {synopsis}
 Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({
  template: reviewTemplate,
  inputVariables: ["synopsis"],
});
const reviewChain = new LLMChain({
  llm: reviewLLM,
  prompt: reviewPromptTemplate,
});
const overallChain = new SimpleSequentialChain({
  chains: [synopsisChain, reviewChain],
  verbose: true,
});
const review = await overallChain.run("Tragedy at sunset on the beach");
console.log(review);
/* variable review contains the generated play review based on the input title and synopsis generated in the first step:

"Tragedy at Sunset on the Beach is a powerful and moving story of love, loss, and redemption. The play follows the story of two young lovers, Jack and Jill, whose plans for a future together are tragically cut short when Jack is killed in a car accident.
The play follows Jill as she struggles to cope with her grief and eventually finds solace in the arms of another man. The play is beautifully written and the performances are outstanding. The actors bring the characters to life with their heartfelt performances, and the audience is taken on an emotional journey as Jill is forced to confront her grief and make a difficult decision between her past and her future. The play culminates in a powerful climax that will leave the audience in tears. Overall, Tragedy at Sunset on the Beach is a powerful and moving story that will stay with you long after the curtain falls. It is a must-see for anyone looking for an emotionally charged and thought-provoking experience."
*/

API Reference: SimpleSequentialChain from langchain/chains, LLMChain from langchain/chains, OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts

SequentialChain

A more advanced scenario, useful when you have multiple chains with more than one input or output key. Unlike SimpleSequentialChain, the outputs from all previous chains will be available to the next chain.

import { SequentialChain, LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

// This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
const llm = new OpenAI({ temperature: 0 });
const template = `You are a playwright.
Given the title of play and the era it is set in, it is your job to write a synopsis for that title.
Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({
  template,
  inputVariables: ["title", "era"],
});
const synopsisChain = new LLMChain({
  llm,
  prompt: promptTemplate,
  outputKey: "synopsis",
});

// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

 Play Synopsis:
 {synopsis}
 Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({
  template: reviewTemplate,
  inputVariables: ["synopsis"],
});
const reviewChain = new LLMChain({
  llm: reviewLLM,
  prompt: reviewPromptTemplate,
  outputKey: "review",
});
const overallChain = new SequentialChain({
  chains: [synopsisChain, reviewChain],
  inputVariables: ["era", "title"],
  // Here we return multiple variables
  outputVariables: ["synopsis", "review"],
  verbose: true,
});
const chainExecutionResult = await overallChain.call({
  title: "Tragedy at sunset on the beach",
  era: "Victorian England",
});
console.log(chainExecutionResult);
/* variable chainExecutionResult contains final review and intermediate synopsis (as specified by outputVariables).
The data is generated based on the input title and era:
"{
  "review": " Tragedy at Sunset on the Beach is a captivating and heartbreaking story of love and loss. Set in Victorian England, the play follows Emily, a young woman struggling to make ends meet in a small coastal town. Emily's dreams of a better life are dashed when she discovers her employer's scandalous affair, and her plans are further thwarted when she meets a handsome stranger on the beach. The play is a powerful exploration of the human condition, as Emily must grapple with the truth and make a difficult decision that will change her life forever. The performances are outstanding, with the actors bringing a depth of emotion to their characters that is both heartbreaking and inspiring. Overall, Tragedy at Sunset on the Beach is a beautiful and moving play that will leave audiences in tears. It is a must-see for anyone looking for a powerful and thought-provoking story. ",
  "synopsis": " Tragedy at Sunset on the Beach is a play set in Victorian England. It tells the story of a young woman, Emily, who is struggling to make ends meet in a small coastal town. She works as a maid for a wealthy family, but her dreams of a better life are dashed when she discovers that her employer is involved in a scandalous affair. Emily is determined to make a better life for herself, but her plans are thwarted when she meets a handsome stranger on the beach one evening.
The two quickly fall in love, but their happiness is short-lived when Emily discovers that the stranger is actually a member of the wealthy family she works for. The play follows Emily as she struggles to come to terms with the truth and make sense of her life. As the sun sets on the beach, Emily must decide whether to stay with the man she loves or to leave him and pursue her dreams. In the end, Emily must make a heartbreaking decision that will change her life forever. ",
}"
*/

API Reference: SequentialChain from langchain/chains, LLMChain from langchain/chains, OpenAI from langchain/llms/openai, PromptTemplate from langchain/prompts
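Under the hood, both sequential chain variants are essentially a loop that threads values from one step into the next. The following langchain-free TypeScript sketch shows the core idea behind SequentialChain: each step's named outputs are merged into a shared set of values that later steps can read. All names here are illustrative stand-ins, not part of the library.

```typescript
// Each "chain" is modeled as a function from named values to named outputs.
type Values = Record<string, string>;
type Chain = {
  inputKeys: string[];
  outputKey: string;
  call: (v: Values) => Values;
};

// Sketch of SequentialChain's core loop: every chain sees the accumulated values,
// and its outputs are merged back in for the chains that follow.
function runSequential(chains: Chain[], inputs: Values, outputKeys: string[]): Values {
  let values: Values = { ...inputs };
  for (const chain of chains) {
    for (const key of chain.inputKeys) {
      if (!(key in values)) throw new Error(`Missing input key: ${key}`);
    }
    values = { ...values, ...chain.call(values) };
  }
  // Return only the requested output variables.
  return Object.fromEntries(outputKeys.map((k) => [k, values[k]]));
}

// Stand-ins for the synopsis and review LLMChains from the example above.
const synopsisChain: Chain = {
  inputKeys: ["title", "era"],
  outputKey: "synopsis",
  call: (v) => ({ synopsis: `Synopsis of "${v.title}" set in ${v.era}` }),
};
const reviewChain: Chain = {
  inputKeys: ["synopsis"],
  outputKey: "review",
  call: (v) => ({ review: `Review: ${v.synopsis}` }),
};

const result = runSequential(
  [synopsisChain, reviewChain],
  { title: "Tragedy at sunset on the beach", era: "Victorian England" },
  ["synopsis", "review"]
);
console.log(result.review);
```

SimpleSequentialChain is the special case where each chain has exactly one input and one output, so the "values" collapse to a single string passed along the pipeline.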
Documents

These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.

These chains are all loaded in a similar way:

import { OpenAI } from "langchain/llms/openai";
import {
  loadQAStuffChain,
  loadQAMapReduceChain,
  loadQARefineChain,
} from "langchain/chains";
import { Document } from "langchain/document";

// This first example uses the `StuffDocumentsChain`.
const llmA = new OpenAI({});
const chainA = loadQAStuffChain(llmA);
const docs = [
  new Document({ pageContent: "Harrison went to Harvard." }),
  new Document({ pageContent: "Ankush went to Princeton." }),
];
const resA = await chainA.call({
  input_documents: docs,
  question: "Where did Harrison go to college?",
});
console.log({ resA });
// { resA: { text: ' Harrison went to Harvard.'
4e9727215e95-2387
// } }

// This second example uses the `MapReduceChain`.
// Optionally limit the number of concurrent requests to the language model.
const llmB = new OpenAI({ maxConcurrency: 10 });
const chainB = loadQAMapReduceChain(llmB);
const resB = await chainB.call({
  input_documents: docs,
  question: "Where did Harrison go to college?",
});
console.log({ resB });
// { resB: { text: ' Harrison went to Harvard.' } }

📄️ Stuff
The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.

📄️ Refine
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.

📄️ Map reduce
The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
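To make the refine description above concrete, here is a minimal, langchain-free TypeScript sketch of the loop it describes. The function shape and prompt wording are illustrative assumptions, not the library's actual implementation.

```typescript
type Doc = { pageContent: string };

// Refine: visit documents one at a time, feeding the latest intermediate
// answer back in so each call can update it with the new document's context.
function refine(
  docs: Doc[],
  question: string,
  llm: (prompt: string) => string // stand-in for an LLM chain call
): string {
  let answer = "";
  for (const doc of docs) {
    const prompt =
      answer === ""
        ? `Context: ${doc.pageContent}\nQuestion: ${question}\nAnswer:`
        : `Existing answer: ${answer}\nRefine it using this new context: ${doc.pageContent}\nQuestion: ${question}\nAnswer:`;
    answer = llm(prompt);
  }
  return answer;
}
```

Because each step only sees one document plus the running answer, refine handles document sets that would not fit in a single prompt, at the cost of one LLM call per document and sequential (non-parallel) execution.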
Stuff

The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.

This chain is well-suited for applications where documents are small and only a few are passed in for most calls.

Here's how it looks in practice:

import { OpenAI } from "langchain/llms/openai";
import { loadQAStuffChain } from "langchain/chains";
import { Document } from "langchain/document";

const llmA = new OpenAI({});
const chainA = loadQAStuffChain(llmA);
const docs = [
  new Document({ pageContent: "Harrison went to Harvard." }),
  new Document({ pageContent: "Ankush went to Princeton." }),
];
const resA = await chainA.call({
  input_documents: docs,
  question: "Where did Harrison go to college?",
});
console.log({ resA });
// { resA: { text: ' Harrison went to Harvard.' } }

API Reference: OpenAI from langchain/llms/openai, loadQAStuffChain from langchain/chains, Document from langchain/document
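The "stuffing" step itself amounts to concatenating every document into a single prompt before making one LLM call. A rough, langchain-free sketch of that idea in TypeScript (the prompt wording is an illustrative assumption, not the library's actual template):

```typescript
type Doc = { pageContent: string };

// Stuff: insert all documents into one prompt and make a single LLM call.
// This only works while the combined documents fit in the model's context window.
function buildStuffPrompt(docs: Doc[], question: string): string {
  const context = docs.map((d) => d.pageContent).join("\n\n");
  return `Use the following context to answer the question.\n\n${context}\n\nQuestion: ${question}\nAnswer:`;
}

const docs: Doc[] = [
  { pageContent: "Harrison went to Harvard." },
  { pageContent: "Ankush went to Princeton." },
];
console.log(buildStuffPrompt(docs, "Where did Harrison go to college?"));
```

When the concatenated documents grow past the context window, this approach breaks down, which is exactly the gap the refine and map reduce chains are designed to fill.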