# Installation

## Updating from versions prior to 0.0.52

If you are updating from a version of LangChain prior to 0.0.52, you will need to update your imports to use the new path structure. For example, where you previously wrote:

```typescript
import { OpenAI } from "langchain/llms";
```

you will now need to write:

```typescript
import { OpenAI } from "langchain/llms/openai";
```

This applies to all imports from the following six modules, which have been split into submodules for each integration. The combined modules are deprecated, do not work outside of Node.js, and will be removed in a future version:

- If you were using `langchain/llms`, see LLMs for updated import paths.
- If you were using `langchain/chat_models`, see Chat Models for updated import paths.
- If you were using `langchain/embeddings`, see Embeddings for updated import paths.
- If you were using `langchain/vectorstores`, see Vector Stores for updated import paths.
- If you were using `langchain/document_loaders`, see Document Loaders for updated import paths.
- If you were using `langchain/retrievers`, see Retrievers for updated import paths.

Other modules are not affected by this change, and you can continue to import them from the same paths.

Additionally, there are some breaking changes that were needed to support new environments:

- `import { Calculator } from "langchain/tools";` has moved to `import { Calculator } from "langchain/tools/calculator";`
- `import { loadLLM } from "langchain/llms";` has moved to `import { loadLLM } from "langchain/llms/load";`
- `import { loadAgent } from "langchain/agents";` has moved to `import { loadAgent } from "langchain/agents/load";`
- `import { loadPrompt } from "langchain/prompts";` has moved to `import { loadPrompt } from "langchain/prompts/load";`
- `import { loadChain } from "langchain/chains";` has moved to `import { loadChain } from "langchain/chains/load";`
## Installing LangChain

LangChain is written in TypeScript and can be used in Node.js, Cloudflare Workers, Vercel / Next.js, Deno / Supabase Edge Functions, and the browser.

To get started, install LangChain with one of the following commands:

```bash
npm install -S langchain
# or
yarn add langchain
# or
pnpm add langchain
```

## TypeScript

LangChain provides type definitions for all of its public APIs.

## ESM

LangChain provides an ESM build targeting Node.js environments. You can import it using the following syntax:

```typescript
import { OpenAI } from "langchain/llms/openai";
```

If you are using TypeScript in an ESM project, we suggest updating your tsconfig.json to include the following:

```json
{
  "compilerOptions": {
    ...
    "target": "ES2020", // or higher
    "module": "nodenext",
  }
}
```

## CommonJS

LangChain provides a CommonJS build targeting Node.js environments. You can import it using the following syntax:

```typescript
const { OpenAI } = require("langchain/llms/openai");
```

## Unsupported: Node.js 16

We do not support Node.js 16, but if you still want to run LangChain on Node.js 16, you will need to follow the instructions below. We do not guarantee that these instructions will continue to work in the future.

You will have to make `fetch` available globally, either:

- run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
- install `node-fetch` and follow its setup instructions

Additionally, you'll have to polyfill `structuredClone`, e.g. by installing `core-js` and following its setup instructions.

If you are running on Node.js 18 or newer, you do not need to do anything.
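As a concrete illustration, here is a minimal sketch of the two polyfills described above. It assumes you have installed `node-fetch` and `core-js`; the exact setup for your project may differ, and this is not an officially supported configuration:

```typescript
// Minimal sketch (an assumption, not an official setup) of the Node.js 16
// polyfills described above. Requires: npm install node-fetch core-js
import fetch, { Headers, Request, Response } from "node-fetch";
import "core-js/actual/structured-clone"; // polyfills globalThis.structuredClone

// Make fetch and its companion classes available globally, as LangChain expects.
(globalThis as any).fetch = fetch;
(globalThis as any).Headers = Headers;
(globalThis as any).Request = Request;
(globalThis as any).Response = Response;
```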
## Cloudflare Workers

LangChain can be used in Cloudflare Workers. You can import it using the same ESM syntax shown above; a sketch of a minimal Worker appears at the end of this section.

## Vercel / Next.js

LangChain can be used in Vercel / Next.js. We support using LangChain in frontend components, in Serverless functions, and in Edge functions. You can import it using the same ESM syntax shown above.

## Deno / Supabase Edge Functions

LangChain can be used in Deno / Supabase Edge Functions. You can import it using the following syntax:

```typescript
import { OpenAI } from "https://esm.sh/langchain/llms/openai";
```

We recommend looking at our Supabase Template for an example of how to use LangChain in Supabase Edge Functions.

## Browser

LangChain can be used in the browser. In our CI we test bundling LangChain with Webpack and Vite, but other bundlers should work too. You can import it using the same ESM syntax shown above.
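To make the Cloudflare Workers case concrete, here is a minimal sketch of our own (not taken from the docs) of a Worker that calls an OpenAI model through LangChain. The `OPENAI_API_KEY` secret binding, the `Env` interface, and the prompt are assumptions:

```typescript
// Hypothetical minimal Cloudflare Worker using LangChain.
// Assumes an OPENAI_API_KEY secret has been bound to the Worker.
import { OpenAI } from "langchain/llms/openai";

export interface Env {
  OPENAI_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const llm = new OpenAI({ openAIApiKey: env.OPENAI_API_KEY, temperature: 0.9 });
    const name = await llm.predict(
      "What would be a good company name for a company that makes colorful socks?"
    );
    return new Response(name);
  },
};
```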
# Quickstart

## Installation

To install LangChain, run:

```bash
npm install -S langchain
# or
yarn add langchain
# or
pnpm add langchain
```

For more details, see our Installation guide.

## Environment setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.

Accessing their API requires an API key, which you can get by creating an account. Once we have a key, we'll want to set it as an environment variable by running:

```bash
export OPENAI_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:

```typescript
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  openAIApiKey: "YOUR_KEY_HERE",
});
```
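In local development, a common pattern (an assumption on our part, not something the docs prescribe) is to keep the key in a .env file and load it with the dotenv package:

```typescript
// Hypothetical local setup: load OPENAI_API_KEY from a .env file.
// Requires: npm install dotenv
import "dotenv/config"; // populates process.env from .env
import { OpenAI } from "langchain/llms/openai";

// With no explicit key, the wrapper falls back to process.env.OPENAI_API_KEY.
const llm = new OpenAI();
```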
## Building an application

Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications, and they can be combined for more complex use cases.

### LLMs

#### Get predictions from a language model

The basic building block of LangChain is the LLM, which takes in text and generates more text.

As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper. In this case, since we want the output to be more random, we'll initialize our model with a high temperature:

```typescript
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  temperature: 0.9,
});
```

And now we can pass in text and get predictions!

```typescript
const result = await llm.predict(
  "What would be a good company name for a company that makes colorful socks?"
);
// "Feetful of Fun"
```

### Chat models

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter.
Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const chat = new ChatOpenAI({ temperature: 0 });

const result = await chat.predictMessages([
  new HumanMessage("Translate this sentence from English to French. I love programming."),
]);
/*
  AIMessage {
    content: "J'adore la programmation."
  }
*/
```
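To show where the other message types fit in, here is a small sketch of our own (not from the docs) that adds a SystemMessage to steer the model; the system text and the output shown are illustrative:

```typescript
// Hypothetical: combine a SystemMessage with a HumanMessage.
import { SystemMessage } from "langchain/schema"; // HumanMessage imported above

const steered = await chat.predictMessages([
  new SystemMessage("You are a translator that replies with the translation only."),
  new HumanMessage("I love programming."),
]);
// AIMessage { content: "J'adore la programmation." } (illustrative output)
```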
It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM. You can access this through the predict interface:

```typescript
const result = await chat.predict(
  "Translate this sentence from English to French. I love programming."
);
// "J'adore la programmation."
```

### Prompt templates

Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.

In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions. With PromptTemplates this is easy! In this case our template would be very simple:

```typescript
import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);

const formattedPrompt = await prompt.format({
  product: "colorful socks",
});
// "What is a good name for a company that makes colorful socks?"
```
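The formatted prompt is an ordinary string, so, as a small illustrative step of our own, it can be passed straight to the LLM from the earlier section:

```typescript
// Hypothetical follow-on: feed the formatted prompt to the llm defined earlier.
const name = await llm.predict(formattedPrompt);
// e.g. "Feetful of Fun"
```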
Similar to LLMs, you can make use of templating with chat models by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates, and use ChatPromptTemplate's formatMessages method to generate the formatted messages. Because this is generating a list of messages, it is slightly more complex than the normal prompt template, which generates only a string. Please see the detailed guides on prompts to understand more options available to you here.

```typescript
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";

const template =
  "You are a helpful assistant that translates {input_language} to {output_language}.";
const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);
const humanTemplate = "{text}";
const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  systemMessagePrompt,
  humanMessagePrompt,
]);

const formattedPrompt = await chatPrompt.formatMessages({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
/*
  [
    SystemMessage {
      content: 'You are a helpful assistant that translates English to French.'
    },
    HumanMessage {
      content: 'I love programming.'
    }
  ]
*/
```

### Chains

Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.

The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template. Using this, we can replace:

```typescript
const result = await llm.predict(
  "What would be a good company name for a company that makes colorful socks?"
);
```
");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }Agents​Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.To load an agent, you need to choose a(n):LLM/Chat model: The language model powering the agent.Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the Tools documentation.Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take.
Because this guide focuses on the simplest, highest-level API, it only covers using the standard supported agents. If you want to implement a custom agent, or want a list of supported agents and their specifications, see the agents documentation.

For this example, we'll be using SerpAPI to query a search engine. You'll need to set the SERPAPI_API_KEY environment variable.

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const model = new OpenAI({ temperature: 0 });
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
  verbose: true,
});

const input =
  "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?";

const result = await executor.call({ input });
```
```text
> Entering new AgentExecutor chain...
Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: 1.0974509573251117
> Finished chain.
```

```typescript
// { output: "1.0974509573251117" }
```

Agents can also be used with chat models. There are a few varieties, but if you are using OpenAI and a functions-capable model, you can use openai-functions as the agent type:

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const executor = await initializeAgentExecutorWithOptions(
  [new Calculator(), new SerpAPI()],
  new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }),
  {
    agentType: "openai-functions",
    verbose: true,
  }
);

const result = await executor.run("What is the temperature in New York?");
/*
  {
    "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke."
  }
*/
```

### Memory

The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions.
This is clearly the case with a chatbot, for example, where you want it to understand new messages in the context of past messages.

The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs, and it lets you modify (or contextualize) the next input using the stored state. There are a number of built-in memory systems. The simplest of these is a buffer memory, which just prepends the last few inputs/outputs to the current input - we will use this in the example below.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new OpenAI({});
const memory = new BufferMemory();
const chain = new ConversationChain({ llm: model, memory, verbose: true });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
```

Here's what's going on under the hood:

```text
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi! I'm Jim.
AI:

> Finished chain.
>> 'Hello! How are you today?'
```
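If you want to inspect what the buffer now holds, a small sketch of our own uses the memory's loadMemoryVariables method; the exact string formatting shown in the comment is illustrative:

```typescript
// Hypothetical peek at the stored conversation state.
const vars = await memory.loadMemoryVariables({});
// e.g. { history: "Human: Hi! I'm Jim.\nAI: Hello! How are you today?" }
```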
Now if we run the chain again:

```typescript
const res2 = await chain.call({ input: "What's my name?" });
```

we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input:

```text
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi! I'm Jim.
AI: Hello! How are you today?
Human: What's my name?
AI:

> Finished chain.
>> "Your name is Jim."
```

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory objects.
```typescript
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
} from "langchain/prompts";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

// Return the current conversation directly as messages and insert them into
// the MessagesPlaceholder in the above prompt.
const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "history",
});

const chain = new ConversationChain({
  memory,
  prompt: chatPrompt,
  llm: chat,
  verbose: true,
});

const res = await chain.call({ input: "My name is Jim." });
// Hello Jim! It's nice to meet you. How can I assist you today?

const res2 = await chain.call({ input: "What is my name?" });
// Your name is Jim. You mentioned it at the beginning of our conversation.
// Is there anything specific you would like to know or discuss, Jim?
```
4e9727215e95-115
Get startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceGet startedQuickstartOn this pageQuickstartInstallation​To install LangChain run:npmYarnpnpmnpm install -S langchainyarn add langchainpnpm add langchainFor more details, see our Installation guide.Environment setup​Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.Accessing their API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});Building an application​Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
4e9727215e95-116
Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.LLMs​Get predictions from a language model​The basic building block of LangChain is the LLM, which takes in text and generates more text.As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper. In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ temperature: 0.9,});And now we can pass in text and get predictions!const result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");// "Feetful of Fun"Chat models​Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter.
4e9727215e95-117
Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage, ChatMessage, SystemMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 0});const result = await chat.predictMessages([ new HumanMessage("Translate this sentence from English to French. I love programming. ")]);/* AIMessage { content: "J'adore la programmation." }*/It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM.
4e9727215e95-118
You can access this through the predict interface.const result = await chat.predict("Translate this sentence from English to French. I love programming. ")// "J'adore la programmation. "Prompt templates​Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.LLMsChat modelsWith PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string.
4e9727215e95-119
Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Chains​Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.LLMsChat modelsThe simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks?
4e9727215e95-120
");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-121
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }Agents​Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.To load an agent, you need to choose a(n):LLM/Chat model: The language model powering the agent.Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the Tools documentation.Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take.
4e9727215e95-122
Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see here. For a list of supported agents and their specifications, see here.For this example, we'll be using SerpAPI to query a search engine.You'll need to set the SERPAPI_API_KEY environment variable.LLMsChat modelsimport { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday.
4e9727215e95-123
Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/Memory​The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions.
4e9727215e95-124
This is clearly the case with a chatbot for example, where you want it to understand new messages in the context of past messages.The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.LLMsChat modelsimport { OpenAI } from "langchain/llms/openai";import { BufferMemory } from "langchain/memory";import { ConversationChain } from "langchain/chains";const model = new OpenAI({});const memory = new BufferMemory();const chain = new ConversationChain({ llm: model, memory, verbose: true,});const res1 = await chain.call({ input: "Hi! I'm Jim." });here's what's going on under the hood> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI:> Finished chain.>> 'Hello! How are you today? 'Now if we run the chain againconst res2 = await chain.call({ input: "What's my name?"
4e9727215e95-125
});we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI: Hello! How are you today?Human: I'm doing well! Just having a conversation with an AI.AI:> Finished chain.>> "Your name is Jim. "You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.import { ConversationChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,} from "langchain/prompts";import { BufferMemory } from "langchain/memory";const chat = new ChatOpenAI({ temperature: 0 });const chatPrompt = ChatPromptTemplate.fromPromptMessages([ SystemMessagePromptTemplate.fromTemplate( "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
4e9727215e95-126
If the AI does not know the answer to a question, it truthfully says it does not know." ), new MessagesPlaceholder("history"), HumanMessagePromptTemplate.fromTemplate("{input}"),]);// Return the current conversation directly as messages and insert them into the MessagesPlaceholder in the above prompt.const memory = new BufferMemory({ returnMessages: true, memoryKey: "history"});const chain = new ConversationChain({ memory, prompt: chatPrompt, llm: chat, verbose: true,});const res = await chain.call({ input: "My name is Jim. ",});Hello Jim! It's nice to meet you. How can I assist you today?const res2 = await chain.call({ input: "What is my name? ",});Your name is Jim. You mentioned it at the beginning of our conversation. Is there anything specific you would like to know or discuss, Jim?PreviousInstallationNextModulesInstallationEnvironment setupBuilding an applicationLLMsChat modelsPrompt templatesChainsAgentsMemory
4e9727215e95-127
Get startedQuickstartOn this pageQuickstartInstallation​To install LangChain run:npmYarnpnpmnpm install -S langchainyarn add langchainpnpm add langchainFor more details, see our Installation guide.Environment setup​Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.Accessing their API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});Building an application​Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.LLMs​Get predictions from a language model​The basic building block of LangChain is the LLM, which takes in text and generates more text.As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper.
4e9727215e95-128
In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ temperature: 0.9,});And now we can pass in text and get predictions!const result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");// "Feetful of Fun"Chat models​Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage, ChatMessage, SystemMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 0});const result = await chat.predictMessages([ new HumanMessage("Translate this sentence from English to French. I love programming. ")]);/* AIMessage { content: "J'adore la programmation." }*/It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM.
4e9727215e95-129
You can access this through the predict interface.const result = await chat.predict("Translate this sentence from English to French. I love programming. ")// "J'adore la programmation. "Prompt templates​Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.LLMsChat modelsWith PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string.
4e9727215e95-130
Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Chains​Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.LLMsChat modelsThe simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks?
4e9727215e95-131
");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-132
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }Agents​Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.To load an agent, you need to choose a(n):LLM/Chat model: The language model powering the agent.Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the Tools documentation.Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take.
4e9727215e95-133
Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see here. For a list of supported agents and their specifications, see here.For this example, we'll be using SerpAPI to query a search engine.You'll need to set the SERPAPI_API_KEY environment variable.LLMsChat modelsimport { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday.
4e9727215e95-134
Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/Memory​The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions.
4e9727215e95-135
This is clearly the case with a chatbot for example, where you want it to understand new messages in the context of past messages.The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.LLMsChat modelsimport { OpenAI } from "langchain/llms/openai";import { BufferMemory } from "langchain/memory";import { ConversationChain } from "langchain/chains";const model = new OpenAI({});const memory = new BufferMemory();const chain = new ConversationChain({ llm: model, memory, verbose: true,});const res1 = await chain.call({ input: "Hi! I'm Jim." });here's what's going on under the hood> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI:> Finished chain.>> 'Hello! How are you today? 'Now if we run the chain againconst res2 = await chain.call({ input: "What's my name?"
4e9727215e95-136
});we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI: Hello! How are you today?Human: I'm doing well! Just having a conversation with an AI.AI:> Finished chain.>> "Your name is Jim. "You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.import { ConversationChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,} from "langchain/prompts";import { BufferMemory } from "langchain/memory";const chat = new ChatOpenAI({ temperature: 0 });const chatPrompt = ChatPromptTemplate.fromPromptMessages([ SystemMessagePromptTemplate.fromTemplate( "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
4e9727215e95-137
If the AI does not know the answer to a question, it truthfully says it does not know." ), new MessagesPlaceholder("history"), HumanMessagePromptTemplate.fromTemplate("{input}"),]);// Return the current conversation directly as messages and insert them into the MessagesPlaceholder in the above prompt.const memory = new BufferMemory({ returnMessages: true, memoryKey: "history"});const chain = new ConversationChain({ memory, prompt: chatPrompt, llm: chat, verbose: true,});const res = await chain.call({ input: "My name is Jim. ",});Hello Jim! It's nice to meet you. How can I assist you today?const res2 = await chain.call({ input: "What is my name? ",});Your name is Jim. You mentioned it at the beginning of our conversation. Is there anything specific you would like to know or discuss, Jim?PreviousInstallationNextModulesInstallationEnvironment setupBuilding an applicationLLMsChat modelsPrompt templatesChainsAgentsMemory
4e9727215e95-138
Get startedQuickstartOn this pageQuickstartInstallation​To install LangChain run:npmYarnpnpmnpm install -S langchainyarn add langchainpnpm add langchainFor more details, see our Installation guide.Environment setup​Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.Accessing their API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});Building an application​Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.LLMs​Get predictions from a language model​The basic building block of LangChain is the LLM, which takes in text and generates more text.As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper.
4e9727215e95-139
In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ temperature: 0.9,});And now we can pass in text and get predictions!const result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");// "Feetful of Fun"Chat models​Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage, ChatMessage, SystemMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 0});const result = await chat.predictMessages([ new HumanMessage("Translate this sentence from English to French. I love programming. ")]);/* AIMessage { content: "J'adore la programmation." }*/It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM.
4e9727215e95-140
You can access this through the predict interface.const result = await chat.predict("Translate this sentence from English to French. I love programming. ")// "J'adore la programmation. "Prompt templates​Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.LLMsChat modelsWith PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string.
4e9727215e95-141
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates, and use the ChatPromptTemplate's formatMessages method to generate the formatted messages. Because this is generating a list of messages, it is slightly more complex than the normal prompt template, which generates only a string. Please see the detailed guides on prompts to understand more options available to you here.

import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";

const template =
  "You are a helpful assistant that translates {input_language} to {output_language}.";
const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);

const humanTemplate = "{text}";
const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  systemMessagePrompt,
  humanMessagePrompt,
]);

const formattedPrompt = await chatPrompt.formatMessages({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
/*
  [
    SystemMessage {
      content: 'You are a helpful assistant that translates English to French.'
    },
    HumanMessage {
      content: 'I love programming.'
    }
  ]
*/
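The resulting message list can be passed directly to the chat model. A minimal sketch, assuming the chat and formattedPrompt objects from the examples above:

// predictMessages accepts the message list produced by formatMessages.
const translation = await chat.predictMessages(formattedPrompt);
// AIMessage { content: "J'adore la programmation." }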
");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-143
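run is shorthand; the more general call method takes named inputs and returns an object keyed by output. A minimal sketch using the chain defined above:

// call takes an object whose keys match the template variables.
const res = await chain.call({ product: "colorful socks" });
// res.text contains the generated name, e.g. "Feetful of Fun"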
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }Agents​Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.To load an agent, you need to choose a(n):LLM/Chat model: The language model powering the agent.Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the Tools documentation.Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take.
4e9727215e95-144
Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see here. For a list of supported agents and their specifications, see here.For this example, we'll be using SerpAPI to query a search engine.You'll need to set the SERPAPI_API_KEY environment variable.LLMsChat modelsimport { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday.
4e9727215e95-145
Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/Memory​The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions.
4e9727215e95-146
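Agent runs are open-ended loops, so in practice it can be worth bounding them. A sketch, assuming the tools and model from the first agent example above; maxIterations is an executor option that caps how many tool-use steps are attempted:

const boundedExecutor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
  // Stop after 5 reasoning/tool steps instead of looping indefinitely.
  maxIterations: 5,
});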
Memory

The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions. This is clearly the case with a chatbot, for example, where you want it to understand new messages in the context of past messages.

The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs, and it lets you modify (or contextualize) the next input using the stored state.

There are a number of built-in memory systems. The simplest of these is a buffer memory, which just prepends the last few inputs/outputs to the current input - we will use this in the example below.

import { OpenAI } from "langchain/llms/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new OpenAI({});
const memory = new BufferMemory();
const chain = new ConversationChain({
  llm: model,
  memory,
  verbose: true,
});
const res1 = await chain.call({ input: "Hi! I'm Jim." });

Here's what's going on under the hood:

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi! I'm Jim.
AI:
> Finished chain.
>> 'Hello! How are you today?'

Now if we run the chain again

const res2 = await chain.call({ input: "What's my name?" });

we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input:

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi! I'm Jim.
AI: Hello! How are you today?
Human: What's my name?
AI:
> Finished chain.
>> "Your name is Jim."

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.

import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
} from "langchain/prompts";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

// Return the current conversation directly as messages and insert them into the
// MessagesPlaceholder in the above prompt.
const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: "history",
});

const chain = new ConversationChain({
  memory,
  prompt: chatPrompt,
  llm: chat,
  verbose: true,
});

const res = await chain.call({
  input: "My name is Jim.",
});
// Hello Jim! It's nice to meet you. How can I assist you today?

const res2 = await chain.call({
  input: "What is my name?",
});
// Your name is Jim. You mentioned it at the beginning of our conversation. Is there anything specific you would like to know or discuss, Jim?
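To inspect what a memory object will inject into the next prompt, you can read its state back directly. A minimal sketch, assuming the memory instance from the example above:

// loadMemoryVariables returns the stored state under the configured memory key.
const vars = await memory.loadMemoryVariables({});
// With returnMessages: true this looks like
// { history: [HumanMessage { ... }, AIMessage { ... }, ...] }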
QuickstartInstallation​To install LangChain run:npmYarnpnpmnpm install -S langchainyarn add langchainpnpm add langchainFor more details, see our Installation guide.Environment setup​Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.Accessing their API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:export OPENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class:import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});Building an application​Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.LLMs​Get predictions from a language model​The basic building block of LangChain is the LLM, which takes in text and generates more text.As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper.
4e9727215e95-150
In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ temperature: 0.9,});And now we can pass in text and get predictions!const result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");// "Feetful of Fun"Chat models​Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage, ChatMessage, SystemMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 0});const result = await chat.predictMessages([ new HumanMessage("Translate this sentence from English to French. I love programming. ")]);/* AIMessage { content: "J'adore la programmation." }*/It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM.
4e9727215e95-151
You can access this through the predict interface.const result = await chat.predict("Translate this sentence from English to French. I love programming. ")// "J'adore la programmation. "Prompt templates​Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.LLMsChat modelsWith PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string.
4e9727215e95-152
Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/Chains​Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.LLMsChat modelsThe simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks?
4e9727215e95-153
");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-154
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }Agents​Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.To load an agent, you need to choose a(n):LLM/Chat model: The language model powering the agent.Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the Tools documentation.Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take.
4e9727215e95-155
Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see here. For a list of supported agents and their specifications, see here.For this example, we'll be using SerpAPI to query a search engine.You'll need to set the SERPAPI_API_KEY environment variable.LLMsChat modelsimport { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday.
4e9727215e95-156
Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/Memory​The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions.
4e9727215e95-157
This is clearly the case with a chatbot for example, where you want it to understand new messages in the context of past messages.The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.LLMsChat modelsimport { OpenAI } from "langchain/llms/openai";import { BufferMemory } from "langchain/memory";import { ConversationChain } from "langchain/chains";const model = new OpenAI({});const memory = new BufferMemory();const chain = new ConversationChain({ llm: model, memory, verbose: true,});const res1 = await chain.call({ input: "Hi! I'm Jim." });here's what's going on under the hood> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI:> Finished chain.>> 'Hello! How are you today? 'Now if we run the chain againconst res2 = await chain.call({ input: "What's my name?"
4e9727215e95-158
});we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI: Hello! How are you today?Human: I'm doing well! Just having a conversation with an AI.AI:> Finished chain.>> "Your name is Jim. "You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.import { ConversationChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,} from "langchain/prompts";import { BufferMemory } from "langchain/memory";const chat = new ChatOpenAI({ temperature: 0 });const chatPrompt = ChatPromptTemplate.fromPromptMessages([ SystemMessagePromptTemplate.fromTemplate( "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
4e9727215e95-159
If the AI does not know the answer to a question, it truthfully says it does not know." ), new MessagesPlaceholder("history"), HumanMessagePromptTemplate.fromTemplate("{input}"),]);// Return the current conversation directly as messages and insert them into the MessagesPlaceholder in the above prompt.const memory = new BufferMemory({ returnMessages: true, memoryKey: "history"});const chain = new ConversationChain({ memory, prompt: chatPrompt, llm: chat, verbose: true,});const res = await chain.call({ input: "My name is Jim. ",});Hello Jim! It's nice to meet you. How can I assist you today?const res2 = await chain.call({ input: "What is my name? ",});Your name is Jim. You mentioned it at the beginning of our conversation. Is there anything specific you would like to know or discuss, Jim? To install LangChain run: For more details, see our Installation guide. Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs. Accessing their API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running: export OPENAI_API_KEY="..." If you'd prefer not to set an environment variable you can pass the key in directly via the openAIApiKey parameter when initializing the OpenAI LLM class: import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ openAIApiKey: "YOUR_KEY_HERE",});
4e9727215e95-160
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases. The basic building block of LangChain is the LLM, which takes in text and generates more text. As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper. In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature. import { OpenAI } from "langchain/llms/openai";const llm = new OpenAI({ temperature: 0.9,}); And now we can pass in text and get predictions! const result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");// "Feetful of Fun" Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs. You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, FunctionMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
4e9727215e95-161
import { ChatOpenAI } from "langchain/chat_models/openai";import { HumanMessage, ChatMessage, SystemMessage } from "langchain/schema";const chat = new ChatOpenAI({ temperature: 0});const result = await chat.predictMessages([ new HumanMessage("Translate this sentence from English to French. I love programming. ")]);/* AIMessage { content: "J'adore la programmation." }*/ It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM. You can access this through the predict interface. const result = await chat.predict("Translate this sentence from English to French. I love programming. ")// "J'adore la programmation." Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand. In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
4e9727215e95-162
LLMsChat modelsWith PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string. Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/
4e9727215e95-163
With PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks? "Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string. Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/
4e9727215e95-164
With PromptTemplates this is easy! In this case our template would be very simple:import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"});"What is a good name for a company that makes colorful socks?" With PromptTemplates this is easy! In this case our template would be very simple: import { PromptTemplate } from "langchain/prompts";const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const formattedPrompt = await prompt.format({ product: "colorful socks"}); "What is a good name for a company that makes colorful socks?"
4e9727215e95-165
"What is a good name for a company that makes colorful socks?" Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages.Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string. Please see the detailed guides on prompts to understand more options available to you here.import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "});/* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/ Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_messages method to generate the formatted messages. Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string. Please see the detailed guides on prompts to understand more options available to you here.
4e9727215e95-166
import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const formattedPrompt = await chatPrompt.formatMessages({ input_language: "English", output_language: "French", text: "I love programming. "}); /* [ SystemMessage { content: 'You are a helpful assistant that translates English to French.' }, HumanMessage { content: 'I love programming.' } ]*/ Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.
4e9727215e95-167
LLMsChat modelsThe simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-168
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" }
4e9727215e95-169
The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}.
4e9727215e95-170
";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" } The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.Using this we can replaceconst result = await llm.predict("What would be a good company name for a company that makes colorful socks? ");withimport { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks");"Feetful of Fun"There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.
4e9727215e95-171
The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template. Using this we can replace const result = await llm.predict("What would be a good company name for a company that makes colorful socks? "); with import { OpenAI } from "langchain/llms/openai";import { LLMChain } from "langchain/chains";import { PromptTemplate } from "langchain/prompts";const llm = new OpenAI({});const prompt = PromptTemplate.fromTemplate("What is a good name for a company that makes {product}? ");const chain = new LLMChain({ llm, prompt});// Run is a convenience method for chains with prompts that require one input and one output.const result = await chain.run("colorful socks"); "Feetful of Fun" There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.
4e9727215e95-172
The LLMChain can be used with chat models as well:import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",});// { text: "J'adore programmer" } The LLMChain can be used with chat models as well:
4e9727215e95-173
The LLMChain can be used with chat models as well: import { ChatOpenAI } from "langchain/chat_models/openai";import { LLMChain } from "langchain/chains";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate} from "langchain/prompts";const template = "You are a helpful assistant that translates {input_language} to {output_language}. ";const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);const humanTemplate = "{text}";const humanMessagePrompt = HumanMessagePromptTemplate.fromTemplate(humanTemplate);const chatPrompt = ChatPromptTemplate.fromPromptMessages([systemMessagePrompt, humanMessagePrompt]);const chat = new ChatOpenAI({ temperature: 0,});const chain = new LLMChain({ llm: chat, prompt: chatPrompt,});const result = await chain.call({ input_language: "English", output_language: "French", text: "I love programming",}); // { text: "J'adore programmer" } Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs. Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer. To load an agent, you need to choose a(n): For this example, we'll be using SerpAPI to query a search engine. You'll need to set the SERPAPI_API_KEY environment variable.
4e9727215e95-174
You'll need to set the SERPAPI_API_KEY environment variable. LLMsChat modelsimport { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models.
4e9727215e95-175
There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/
4e9727215e95-176
import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }Agents can also be used with chat models.
4e9727215e95-177
There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/
4e9727215e95-178
import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,});> Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain.// { output: "1.0974509573251117" }
4e9727215e95-179
import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = "What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power? ";const result = await executor.call({ input,}); > Entering new AgentExecutor chain...Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.Action: SearchAction Input: "High temperature in SF yesterday"Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.Action: CalculatorAction Input: 57^.023Observation: Answer: 1.0974509573251117Thought: I now know the final answerFinal Answer: 1.0974509573251117.> Finished chain. // { output: "1.0974509573251117" }
4e9727215e95-180
// { output: "1.0974509573251117" } Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type.import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/ Agents can also be used with chat models. There are a few varieties, but if using OpenAI and a functions-capable model, you can use openai-functions as the agent type. import { initializeAgentExecutorWithOptions } from "langchain/agents";import { ChatOpenAI } from "langchain/chat_models/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const executor = await initializeAgentExecutorWithOptions( [new Calculator(), new SerpAPI()], new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 }), { agentType: "openai-functions", verbose: true, });const result = await executor.run("What is the temperature in New York? ");
4e9727215e95-181
/* { "output": "The current temperature in New York is 89°F, but it feels like 92°F. Please be cautious as the heat can lead to dehydration or heat stroke." }*/ The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions. This is clearly the case with a chatbot for example, where you want it to understand new messages in the context of past messages. The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state. There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.
4e9727215e95-182
LLMsChat modelsimport { OpenAI } from "langchain/llms/openai";import { BufferMemory } from "langchain/memory";import { ConversationChain } from "langchain/chains";const model = new OpenAI({});const memory = new BufferMemory();const chain = new ConversationChain({ llm: model, memory, verbose: true,});const res1 = await chain.call({ input: "Hi! I'm Jim." });here's what's going on under the hood> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI:> Finished chain.>> 'Hello! How are you today? 'Now if we run the chain againconst res2 = await chain.call({ input: "What's my name?" });we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input> Entering new chain...Prompt after formatting:The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:Human: Hi there!AI: Hello! How are you today?Human: I'm doing well! Just having a conversation with an AI.AI:> Finished chain.>> "Your name is Jim.
4e9727215e95-183
"You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.import { ConversationChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,} from "langchain/prompts";import { BufferMemory } from "langchain/memory";const chat = new ChatOpenAI({ temperature: 0 });const chatPrompt = ChatPromptTemplate.fromPromptMessages([ SystemMessagePromptTemplate.fromTemplate( "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know." ), new MessagesPlaceholder("history"), HumanMessagePromptTemplate.fromTemplate("{input}"),]);// Return the current conversation directly as messages and insert them into the MessagesPlaceholder in the above prompt.const memory = new BufferMemory({ returnMessages: true, memoryKey: "history"});const chain = new ConversationChain({ memory, prompt: chatPrompt, llm: chat, verbose: true,});const res = await chain.call({ input: "My name is Jim. ",});Hello Jim! It's nice to meet you. How can I assist you today?const res2 = await chain.call({ input: "What is my name? ",});Your name is Jim. You mentioned it at the beginning of our conversation. Is there anything specific you would like to know or discuss, Jim?
Page Title: Modules | 🦜️🔗 Langchain

LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:

Model I/O: Interface with language models
Data connection: Interface with application-specific data
Chains: Construct sequences of calls
Agents: Let chains choose which tools to use given high-level directives
Memory: Persist application state between runs of a chain
Callbacks: Log and stream intermediate steps of any chain
Page Title: Model I/O | 🦜️🔗 Langchain

The core element of any language model application is...the model. LangChain gives you the building blocks to interface with any language model.

Prompts: Templatize, dynamically select, and manage model inputs
Language models: Make calls to language models through common interfaces
Output parsers: Extract information from model outputs
Page Title: Prompts | 🦜️🔗 Langchain

The new way of programming models is through prompts. A prompt refers to the input to the model. This input is often constructed from multiple components. LangChain provides several classes and functions to make constructing and working with prompts easy.

Prompt templates: Parametrize model inputs
Example selectors: Dynamically select examples to include in prompts
Page Title: Prompt templates | 🦜️🔗 Langchain

Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.

What is a prompt template?

A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt.

A prompt template can contain:
instructions to the language model,
a set of few-shot examples to help the language model generate a better response,
a question to the language model.

Here's a simple example:

import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate<{ product: string }>(
  `You are a naming consultant for new companies.
What is a good name for a company that makes {product}?`
);

const formattedPrompt = await prompt.format({
  product: "colorful socks",
});
/*
  You are a naming consultant for new companies.
  What is a good name for a company that makes colorful socks?
*/

Create a prompt template

You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.

import { PromptTemplate } from "langchain/prompts";

// An example prompt with no input variables
const noInputPrompt = new PromptTemplate({
  inputVariables: [],
  template: "Tell me a joke.",
});
const formattedNoInputPrompt = await noInputPrompt.format();
console.log(formattedNoInputPrompt);
// "Tell me a joke."

// An example prompt with one input variable
const oneInputPrompt = new PromptTemplate({
  inputVariables: ["adjective"],
  template: "Tell me a {adjective} joke.",
});
const formattedOneInputPrompt = await oneInputPrompt.format({
  adjective: "funny",
});
console.log(formattedOneInputPrompt);
// "Tell me a funny joke."

// An example prompt with multiple input variables
const multipleInputPrompt = new PromptTemplate({
  inputVariables: ["adjective", "content"],
  template: "Tell me a {adjective} joke about {content}.",
});
const formattedMultipleInputPrompt = await multipleInputPrompt.format({
  adjective: "funny",
  content: "chickens",
});
console.log(formattedMultipleInputPrompt);
// "Tell me a funny joke about chickens."

If you do not wish to specify inputVariables manually, you can also create a PromptTemplate using the fromTemplate class method. LangChain will automatically infer the inputVariables based on the template passed.

import { PromptTemplate } from "langchain/prompts";

const template = "Tell me a {adjective} joke about {content}.";
const promptTemplate = PromptTemplate.fromTemplate(template);
console.log(promptTemplate.inputVariables);
// ['adjective', 'content']
const formattedPromptTemplate = await promptTemplate.format({
  adjective: "funny",
  content: "chickens",
});
console.log(formattedPromptTemplate);
// "Tell me a funny joke about chickens."

Note: If you're using TypeScript, keep in mind that if you use .fromTemplate in this way, the compiler will not be able to automatically infer what inputs are required. To get around this, you can manually specify a type parameter like this:

const template = "Tell me a {adjective} joke about {content}.";
const promptTemplate = PromptTemplate.fromTemplate<{
  adjective: string;
  content: string;
}>(template);
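Since a formatted prompt is just a string, you can hand it straight to an LLM. A minimal sketch, assuming an OpenAI API key is configured in your environment and reusing formattedPromptTemplate from above:

import { OpenAI } from "langchain/llms/openai";

// The formatted template ("Tell me a funny joke about chickens.") is plain text input.
const model = new OpenAI({ temperature: 0.9 });
const joke = await model.call(formattedPromptTemplate);
console.log(joke);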
You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.

Chat prompt template

Chat models take a list of chat messages as input - this list is commonly referred to as a prompt. These chat messages differ from raw strings (which you would pass into an LLM) in that every message is associated with a role.

For example, in the OpenAI Chat Completion API, a chat message can be associated with an AI, human, or system role. The model is supposed to follow instructions from the system chat message more closely.

LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of PromptTemplate when querying chat models, to fully explore the potential of the underlying chat model.

import {
  ChatPromptTemplate,
  PromptTemplate,
  SystemMessagePromptTemplate,
  AIMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";

To create a message template associated with a role, you use the corresponding <ROLE>MessagePromptTemplate.

For convenience, there is a fromTemplate method exposed on these classes. If you were to use this template, this is what it would look like:

const template =
  "You are a helpful assistant that translates {input_language} to {output_language}.";
const systemMessagePrompt = SystemMessagePromptTemplate.fromTemplate(template);
const humanTemplate = "{text}";
const humanMessagePrompt =
  HumanMessagePromptTemplate.fromTemplate(humanTemplate);

If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate externally and then pass it in, e.g.:

const prompt = new PromptTemplate({
  template:
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  inputVariables: ["input_language", "output_language"],
});
const systemMessagePrompt2 = new SystemMessagePromptTemplate({
  prompt,
});

After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's formatPrompt method -- this returns a PromptValue, which you can convert to a string or message objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  systemMessagePrompt,
  humanMessagePrompt,
]);

// Format the messages
const formattedChatPrompt = await chatPrompt.formatMessages({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});

console.log(formattedChatPrompt);
/*
  [
    SystemMessage {
      content: 'You are a helpful assistant that translates English to French.'
    },
    HumanMessage {
      content: 'I love programming.'
    }
  ]
*/
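The example above uses formatMessages; here's a sketch of the PromptValue route described a moment ago. Assumption: in the JS API the method is exposed as formatPromptValue (other materials may call it formatPrompt), and the returned value supports toString() and toChatMessages():

// Sketch: one PromptValue, two renderings.
const promptValue = await chatPrompt.formatPromptValue({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});

console.log(promptValue.toString()); // a single formatted string, usable as LLM input
console.log(promptValue.toChatMessages()); // message objects, usable as chat model input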
Note: Similarly to the PromptTemplate example, if using TypeScript, you can add typing to prompts created with .fromPromptMessages by passing a type parameter like this:

const chatPrompt = ChatPromptTemplate.fromPromptMessages<{
  input_language: string;
  output_language: string;
  text: string;
}>([systemMessagePrompt, humanMessagePrompt]);
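To close the loop, the formatted messages can be passed to a chat model directly. A minimal sketch, assuming an OpenAI API key is configured and reusing formattedChatPrompt from above; the translated output is illustrative:

import { ChatOpenAI } from "langchain/chat_models/openai";

// Send the formatted chat messages straight to a chat model.
const chatModel = new ChatOpenAI({ temperature: 0 });
const response = await chatModel.call(formattedChatPrompt);
console.log(response.content);
// e.g. "J'adore la programmation."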