Subscribing to events

You can subscribe to a number of events that are emitted by the agent and the underlying tools, chains, and models via callbacks. For more information on the available events, see the Callbacks section of the docs.

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const model = new OpenAI({ temperature: 0 });
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
});

const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`;

const result = await executor.run(input, [
  {
    handleAgentAction(action, runId) {
      console.log("\nhandleAgentAction", action, runId);
    },
    handleAgentEnd(action, runId) {
      console.log("\nhandleAgentEnd", action, runId);
    },
    handleToolEnd(output, runId) {
      console.log("\nhandleToolEnd", output, runId);
    },
  },
]);

/*
handleAgentAction {
  tool: 'search',
  toolInput: 'Olivia Wilde boyfriend',
  log: " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n" +
    'Action: search\n' +
    'Action Input: "Olivia Wilde boyfriend"'
} 9b978461-1f6f-4d5f-80cf-5b229ce181b6

handleToolEnd In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Their relationship ended in November 2022. 062fef47-8ad1-4729-9949-a57be252e002

handleAgentAction {
  tool: 'search',
  toolInput: 'Harry Styles age',
  log: " I need to find out Harry Styles' age.\n" +
    'Action: search\n' +
    'Action Input: "Harry Styles age"'
} 9b978461-1f6f-4d5f-80cf-5b229ce181b6

handleToolEnd 29 years 61d77e10-c119-435d-a985-1f9d45f0ef08

handleAgentAction {
  tool: 'calculator',
  toolInput: '29^0.23',
  log: ' I need to calculate 29 raised to the 0.23 power.\n' +
    'Action: calculator\n' +
    'Action Input: 29^0.23'
} 9b978461-1f6f-4d5f-80cf-5b229ce181b6

handleToolEnd 2.169459462491557 07aec96a-ce19-4425-b863-2eae39db8199

handleAgentEnd {
  returnValues: {
    output: "Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557."
  },
  log: ' I now know the final answer.\n' +
    "Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557."
} 9b978461-1f6f-4d5f-80cf-5b229ce181b6
*/

console.log({ result });
// { result: "Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557." }
```

API Reference: initializeAgentExecutorWithOptions from langchain/agents; OpenAI from langchain/llms/openai; SerpAPI from langchain/tools; Calculator from langchain/tools/calculator
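The handlers passed to `executor.run` above are just an object whose methods the executor invokes as events fire during the run. A minimal self-contained sketch of that pattern, where the `runAgent` runner and its scripted events are hypothetical stand-ins rather than the LangChain API:

```javascript
// Minimal sketch of the callback pattern: a runner invokes whichever
// handler methods the subscriber provided, in order, as events occur.
// `runAgent` and its scripted events are illustrative stand-ins.
function runAgent(input, handlers) {
  const emit = (event, ...args) => {
    for (const h of handlers) {
      if (typeof h[event] === "function") h[event](...args);
    }
  };
  // A scripted two-step run: one tool call, then the final answer.
  emit("handleAgentAction", { tool: "search", toolInput: input });
  emit("handleToolEnd", "some search result");
  emit("handleAgentEnd", { returnValues: { output: "done" } });
  return "done";
}

const events = [];
const result = runAgent("Olivia Wilde boyfriend", [
  {
    handleAgentAction(action) {
      events.push(["action", action.tool]);
    },
    handleToolEnd(output) {
      events.push(["toolEnd", output]);
    },
    handleAgentEnd(end) {
      events.push(["end", end.returnValues.output]);
    },
  },
]);
// `events` now holds one entry per emitted event, in emission order.
```

Handlers that omit a method simply skip that event, which is why a partial handler object like the one above works: you only implement the events you care about.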
Cancelling requests

You can cancel a request by passing a `signal` option when you run the agent. For example:

```typescript
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const model = new OpenAI({ temperature: 0 });
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
});

const controller = new AbortController();
// Call `controller.abort()` somewhere to cancel the request.
setTimeout(() => {
  controller.abort();
}, 2000);

try {
  const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`;
  const result = await executor.call({ input, signal: controller.signal });
} catch (e) {
  console.log(e);
  /*
  Error: Cancel: canceled
      at file:///Users/nuno/dev/langchainjs/langchain/dist/util/async_caller.js:60:23
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at RetryOperation._fn (/Users/nuno/dev/langchainjs/node_modules/p-retry/index.js:50:12) {
    attemptNumber: 1,
    retriesLeft: 6
  }
  */
}
```

API Reference: initializeAgentExecutorWithOptions from langchain/agents; OpenAI from langchain/llms/openai; SerpAPI from langchain/tools; Calculator from langchain/tools/calculator

Note that this will only cancel the outgoing request if the underlying provider exposes that option. LangChain will cancel the underlying request if possible; otherwise it will cancel the processing of the response.
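Under the hood this relies on the standard `AbortController`/`AbortSignal` mechanism. A self-contained sketch of how a signal cancels a pending operation, where `delay` is a hypothetical stand-in for a long-running LLM call:

```javascript
// Sketch of cancellation via AbortSignal: the pending promise rejects
// as soon as the controller aborts. `delay` stands in for a slow call.
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new Error("canceled"));
    const timer = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("canceled"));
    });
  });
}

const controller = new AbortController();
// Abort shortly after starting, like the 2s timeout in the example above.
setTimeout(() => controller.abort(), 20);

const outcome = delay(10_000, controller.signal)
  .then(() => "completed")
  .catch((e) => e.message);

outcome.then((o) => console.log(o)); // logs "canceled"
```

The key design point is that the caller owns the controller while the callee only sees the read-only signal, so any number of nested operations can observe the same cancellation.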
Custom LLM Agent

This notebook goes through how to create your own custom LLM agent.

An LLM agent consists of the following parts:

- PromptTemplate: the prompt template that can be used to instruct the language model on what to do
- LLM: the language model that powers the agent
- Stop sequence: instructs the LLM to stop generating as soon as this string is found
- OutputParser: determines how to parse the LLM output into an AgentAction or AgentFinish object

The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:

1. Passes user input and any previous steps to the agent (in this case, the LLMAgent)
2. If the agent returns an AgentFinish, returns that directly to the user
3. If the agent returns an AgentAction, uses that to call a tool and get an Observation
4. Repeats, passing the AgentAction and Observation back to the agent until an AgentFinish is emitted

AgentAction is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. `log` can also be provided as more context (that can be used for logging, tracing, etc.).

AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.

```typescript
import {
  LLMSingleActionAgent,
  AgentActionOutputParser,
  AgentExecutor,
} from "langchain/agents";
import { LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import {
  BasePromptTemplate,
  BaseStringPromptTemplate,
  SerializedBasePromptTemplate,
  renderTemplate,
} from "langchain/prompts";
import {
  InputValues,
  PartialValues,
  AgentStep,
  AgentAction,
  AgentFinish,
} from "langchain/schema";
import { SerpAPI, Tool } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const PREFIX = `Answer the following questions as best you can. You have access to the following tools:`;
const formatInstructions = (
  toolNames: string
) => `Use the following format in your response:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [${toolNames}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question`;
const SUFFIX = `Begin!

Question: {input}
Thought:{agent_scratchpad}`;

class CustomPromptTemplate extends BaseStringPromptTemplate {
  tools: Tool[];

  constructor(args: { tools: Tool[]; inputVariables: string[] }) {
    super({ inputVariables: args.inputVariables });
    this.tools = args.tools;
  }

  _getPromptType(): string {
    throw new Error("Not implemented");
  }

  format(input: InputValues): Promise<string> {
    /** Construct the final template */
    const toolStrings = this.tools
      .map((tool) => `${tool.name}: ${tool.description}`)
      .join("\n");
    const toolNames = this.tools.map((tool) => tool.name).join("\n");
    const instructions = formatInstructions(toolNames);
    const template = [PREFIX, toolStrings, instructions, SUFFIX].join("\n\n");
    /** Construct the agent_scratchpad */
    const intermediateSteps = input.intermediate_steps as AgentStep[];
    const agentScratchpad = intermediateSteps.reduce(
      (thoughts, { action, observation }) =>
        thoughts +
        [action.log, `\nObservation: ${observation}`, "Thought:"].join("\n"),
      ""
    );
    const newInput = { agent_scratchpad: agentScratchpad, ...input };
    /** Format the template. */
    return Promise.resolve(renderTemplate(template, "f-string", newInput));
  }

  partial(_values: PartialValues): Promise<BaseStringPromptTemplate> {
    throw new Error("Not implemented");
  }

  serialize(): SerializedBasePromptTemplate {
    throw new Error("Not implemented");
  }
}

class CustomOutputParser extends AgentActionOutputParser {
  lc_namespace = ["langchain", "agents", "custom_llm_agent"];

  async parse(text: string): Promise<AgentAction | AgentFinish> {
    if (text.includes("Final Answer:")) {
      const parts = text.split("Final Answer:");
      const input = parts[parts.length - 1].trim();
      const finalAnswers = { output: input };
      return { log: text, returnValues: finalAnswers };
    }

    const match = /Action: (.*)\nAction Input: (.*)/s.exec(text);
    if (!match) {
      throw new Error(`Could not parse LLM output: ${text}`);
    }

    return {
      tool: match[1].trim(),
      toolInput: match[2].trim().replace(/^"+|"+$/g, ""),
      log: text,
    };
  }

  getFormatInstructions(): string {
    throw new Error("Not implemented");
  }
}

export const run = async () => {
  const model = new OpenAI({ temperature: 0 });
  const tools = [
    new SerpAPI(process.env.SERPAPI_API_KEY, {
      location: "Austin,Texas,United States",
      hl: "en",
      gl: "us",
    }),
    new Calculator(),
  ];

  const llmChain = new LLMChain({
    prompt: new CustomPromptTemplate({
      tools,
      inputVariables: ["input", "agent_scratchpad"],
    }),
    llm: model,
  });

  const agent = new LLMSingleActionAgent({
    llmChain,
    outputParser: new CustomOutputParser(),
    stop: ["\nObservation"],
  });
  const executor = new AgentExecutor({
    agent,
    tools,
  });
  console.log("Loaded agent.");

  const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`;
  console.log(`Executing with input "${input}"...`);

  const result = await executor.call({ input });

  console.log(`Got output ${result.output}`);
};
```

API Reference: LLMSingleActionAgent, AgentActionOutputParser, AgentExecutor from langchain/agents; LLMChain from langchain/chains; OpenAI from langchain/llms/openai; BasePromptTemplate, BaseStringPromptTemplate, SerializedBasePromptTemplate, renderTemplate from langchain/prompts; InputValues, PartialValues, AgentStep, AgentAction, AgentFinish from langchain/schema; SerpAPI, Tool from langchain/tools; Calculator from langchain/tools/calculator
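The parse-then-act loop the executor runs can be sketched without LangChain at all. Everything below is hypothetical (the scripted model replies, the `calculator` tool, the `run` helper), but it mirrors the shape of the custom parser's `Action:`/`Final Answer:` handling and the AgentAction/AgentFinish loop described above:

```javascript
// Sketch of the AgentExecutor loop: ask the model, parse its output,
// then either finish or call a tool and feed the observation back in.
// The scripted "model" and the tool table are stand-ins, not LangChain.
const tools = {
  calculator: (input) => String(Math.pow(29, 0.23)),
};

// A fake LLM: first replies with an action, then with a final answer.
const scripted = [
  "Thought: I need to compute this.\nAction: calculator\nAction Input: 29^0.23",
  "Thought: I now know the final answer.\nFinal Answer: about 2.17",
];
let call = 0;
const llm = () => scripted[call++];

function parse(text) {
  // AgentFinish: the model signalled it is done.
  if (text.includes("Final Answer:")) {
    return { finish: text.split("Final Answer:").pop().trim() };
  }
  // AgentAction: extract the tool name and its input.
  const match = /Action: (.*)\nAction Input: (.*)/s.exec(text);
  if (!match) throw new Error(`Could not parse output: ${text}`);
  return { tool: match[1].trim(), toolInput: match[2].trim() };
}

function run(input) {
  const steps = [];
  for (let i = 0; i < 10; i++) {
    const parsed = parse(llm(input, steps));
    if (parsed.finish !== undefined) return parsed.finish; // AgentFinish
    const observation = tools[parsed.tool](parsed.toolInput); // AgentAction
    steps.push({ action: parsed, observation }); // becomes the scratchpad
  }
  throw new Error("Too many iterations");
}

const answer = run("What is 29 raised to the 0.23 power?");
// answer === "about 2.17"
```

The iteration cap plays the role of the executor's max-iterations guard: without it, a model that never emits `Final Answer:` would loop forever.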
4e9727215e95-3130
Get startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsMemoryAgentsAgent typesHow-toSubscribing to eventsCancelling requestsCustom LLM AgentCustom LLM Agent (with a ChatModel)Logging and tracingAdding a timeoutToolsToolkitsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesAgentsHow-toCustom LLM AgentCustom LLM AgentThis notebook goes through how to create your own custom LLM agent.An LLM agent consists of three parts:PromptTemplate: This is the prompt template that can be used to instruct the language model on what to doLLM: This is the language model that powers the agentstop sequence: Instructs the LLM to stop generating as soon as this string is foundOutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish objectThe LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:Passes user input and any previous steps to the Agent (in this case, the LLMAgent)If the Agent returns an AgentFinish, then return that directly to the userIf the Agent returns an AgentAction, then use that to call a tool and get an ObservationRepeat, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted.AgentAction is a response that consists of action and action_input. action refers to which tool to use, and action_input refers to the input to that tool.
4e9727215e95-3131
log can also be provided as more context (that can be used for logging, tracing, etc).AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.import { LLMSingleActionAgent, AgentActionOutputParser, AgentExecutor,} from "langchain/agents";import { LLMChain } from "langchain/chains";import { OpenAI } from "langchain/llms/openai";import { BasePromptTemplate, BaseStringPromptTemplate, SerializedBasePromptTemplate, renderTemplate,} from "langchain/prompts";import { InputValues, PartialValues, AgentStep, AgentAction, AgentFinish,} from "langchain/schema";import { SerpAPI, Tool } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const PREFIX = `Answer the following questions as best you can. You have access to the following tools:`;const formatInstructions = ( toolNames: string) => `Use the following format in your response:Question: the input question you must answerThought: you should always think about what to doAction: the action to take, should be one of [${toolNames}]Action Input: the input to the actionObservation: the result of the action... (this Thought/Action/Action Input/Observation can repeat N times)Thought: I now know the final answerFinal Answer: the final answer to the original input question`;const SUFFIX = `Begin!Question: {input}Thought:{agent_scratchpad}`;class CustomPromptTemplate extends BaseStringPromptTemplate { tools: Tool[]; constructor(args: { tools: Tool[]; inputVariables: string[] }) { super({ inputVariables: args.inputVariables }); this.tools = args.tools; } _getPromptType(): string {
4e9727215e95-3132
throw new Error("Not implemented"); } format(input: InputValues): Promise<string> { /** Construct the final template */ const toolStrings = this.tools .map((tool) => `${tool.name}: ${tool.description}`) .join("\n"); const toolNames = this.tools.map((tool) => tool.name).join("\n"); const instructions = formatInstructions(toolNames); const template = [PREFIX, toolStrings, instructions, SUFFIX].join("\n\n"); /** Construct the agent_scratchpad */ const intermediateSteps = input.intermediate_steps as AgentStep[]; const agentScratchpad = intermediateSteps.reduce( (thoughts, { action, observation }) => thoughts + [action.log, `\nObservation: ${observation}`, "Thought:"].join("\n"), "" ); const newInput = { agent_scratchpad: agentScratchpad, ...input }; /** Format the template. / return Promise.resolve(renderTemplate(template, "f-string", newInput)); } partial(_values: PartialValues): Promise<BaseStringPromptTemplate> { throw new Error("Not implemented"); } serialize(): SerializedBasePromptTemplate { throw new Error("Not implemented"); }}class CustomOutputParser extends AgentActionOutputParser { lc_namespace = ["langchain", "agents", "custom_llm_agent"]; async parse(text: string): Promise<AgentAction | AgentFinish> { if (text.includes("Final Answer:")) { const parts = text.split("Final Answer:"); const input = parts[parts.length - 1].trim(); const finalAnswers = { output: input }; return { log: text, returnValues: finalAnswers }; } const match = /Action: (. *)\nAction Input: (.
)/s.exec(text); if (!match) { throw new Error(`Could not parse LLM output: ${text}`); } return { tool: match[1].trim(), toolInput: match[2].trim().replace(/^"+|"+$/g, ""), log: text, }; } getFormatInstructions(): string { throw new Error("Not implemented"); }}export const run = async () => { const model = new OpenAI({ temperature: 0 }); const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(), ]; const llmChain = new LLMChain({ prompt: new CustomPromptTemplate({ tools, inputVariables: ["input", "agent_scratchpad"], }), llm: model, }); const agent = new LLMSingleActionAgent({ llmChain, outputParser: new CustomOutputParser(), stop: ["\nObservation"], }); const executor = new AgentExecutor({ agent, tools, }); console.log("Loaded agent. "); const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`; console.log(`Executing with input "${input}"...`); const result = await executor.call({ input }); console.log(`Got output ${result.output}`);};API Reference:LLMSingleActionAgent from langchain/agentsAgentActionOutputParser from langchain/agentsAgentExecutor from langchain/agentsLLMChain from langchain/chainsOpenAI from langchain/llms/openaiBasePromptTemplate from langchain/promptsBaseStringPromptTemplate from langchain/promptsSerializedBasePromptTemplate from langchain/promptsrenderTemplate from langchain/promptsInputValues from langchain/schemaPartialValues from langchain/schemaAgentStep from langchain/schemaAgentAction from langchain/schemaAgentFinish from langchain/schemaSerpAPI from langchain/toolsTool from langchain/toolsCalculator from langchain/tools/calculator
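The CustomOutputParser above hinges on two string checks: a "Final Answer:" marker ends the run, and otherwise an Action / Action Input pair is extracted with a regex. The same logic can be shown in isolation; `parseReActOutput`, `ParsedAction`, and `ParsedFinish` below are hypothetical names for illustration, not LangChain exports:

```typescript
// Standalone sketch of the parsing rule used by CustomOutputParser.
interface ParsedAction {
  tool: string;
  toolInput: string;
  log: string;
}

interface ParsedFinish {
  output: string;
  log: string;
}

function parseReActOutput(text: string): ParsedAction | ParsedFinish {
  // A "Final Answer:" marker ends the run (this maps to AgentFinish).
  if (text.includes("Final Answer:")) {
    const parts = text.split("Final Answer:");
    return { output: parts[parts.length - 1].trim(), log: text };
  }
  // Otherwise expect an Action / Action Input pair (maps to AgentAction).
  // The /s (dotAll) flag lets a multi-line tool input be captured.
  const match = /Action: (.*)\nAction Input: (.*)/s.exec(text);
  if (!match) {
    throw new Error(`Could not parse LLM output: ${text}`);
  }
  return {
    tool: match[1].trim(),
    // Strip any quotes the model wrapped around the input.
    toolInput: match[2].trim().replace(/^"+|"+$/g, ""),
    log: text,
  };
}

const action = parseReActOutput(
  'Thought: I should look this up.\nAction: search\nAction Input: "Olivia Wilde boyfriend"'
) as ParsedAction;
console.log(action.tool, "/", action.toolInput);
// → search / Olivia Wilde boyfriend
```

This is also why the agent is constructed with `stop: ["\nObservation"]` — the model must stop before writing an Observation itself, so the text handed to the parser always ends at the Action Input line.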
Custom LLM Agent (with a ChatModel)This notebook goes through how to create your own custom agent based on a chat model.An LLM chat agent consists of four parts:PromptTemplate: This is the prompt template that can be used to instruct the language model on what to doChatModel: This is the language model that powers the agentstop sequence: Instructs the LLM to stop generating as soon as this string is foundOutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish objectThe LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:Passes user input and any previous steps to the Agent (in this case, the LLMAgent)If the Agent returns an AgentFinish, then return that directly to the userIf the Agent returns an AgentAction, then use that to call a tool and get an ObservationRepeat, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted.AgentAction is a response that consists of action and action_input.
action refers to which tool to use, and action_input refers to the input to that tool. log can also be provided as more context (that can be used for logging, tracing, etc).AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.import { AgentActionOutputParser, AgentExecutor, LLMSingleActionAgent,} from "langchain/agents";import { LLMChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { BaseChatPromptTemplate, BasePromptTemplate, SerializedBasePromptTemplate, renderTemplate,} from "langchain/prompts";import { AgentAction, AgentFinish, AgentStep, BaseMessage, HumanMessage, InputValues, PartialValues,} from "langchain/schema";import { SerpAPI, Tool } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const PREFIX = `Answer the following questions as best you can.
You have access to the following tools:`;const formatInstructions = ( toolNames: string) => `Use the following format in your response:Question: the input question you must answerThought: you should always think about what to doAction: the action to take, should be one of [${toolNames}]Action Input: the input to the actionObservation: the result of the action... (this Thought/Action/Action Input/Observation can repeat N times)Thought: I now know the final answerFinal Answer: the final answer to the original input question`;const SUFFIX = `Begin!Question: {input}Thought:{agent_scratchpad}`;class CustomPromptTemplate extends BaseChatPromptTemplate { tools: Tool[]; constructor(args: { tools: Tool[]; inputVariables: string[] }) { super({ inputVariables: args.inputVariables }); this.tools = args.tools; } _getPromptType(): string { throw new
Error("Not implemented"); } async formatMessages(values: InputValues): Promise<BaseMessage[]> { /** Construct the final template */ const toolStrings = this.tools .map((tool) => `${tool.name}: ${tool.description}`) .join("\n"); const toolNames = this.tools.map((tool) => tool.name).join("\n"); const instructions = formatInstructions(toolNames); const template = [PREFIX, toolStrings, instructions, SUFFIX].join("\n\n"); /** Construct the agent_scratchpad */ const intermediateSteps = values.intermediate_steps as AgentStep[]; const agentScratchpad = intermediateSteps.reduce( (thoughts, { action, observation }) => thoughts + [action.log, `\nObservation: ${observation}`, "Thought:"].join("\n"), "" ); const newInput = { agent_scratchpad: agentScratchpad, ...values }; /** Format the template. */ const formatted = renderTemplate(template, "f-string", newInput); return [new HumanMessage(formatted)]; } partial(_values: PartialValues): Promise<BaseChatPromptTemplate> { throw new Error("Not implemented"); } serialize(): SerializedBasePromptTemplate { throw new Error("Not implemented"); }}class CustomOutputParser extends AgentActionOutputParser { lc_namespace = ["langchain", "agents", "custom_llm_agent_chat"]; async parse(text: string): Promise<AgentAction | AgentFinish> { if (text.includes("Final Answer:")) { const parts = text.split("Final Answer:"); const input = parts[parts.length - 1].trim(); const finalAnswers = { output: input }; return { log: text, returnValues: finalAnswers }; } const match = /Action: (.*)\nAction Input: (.*
)/s.exec(text); if (!match) { throw new Error(`Could not parse LLM output: ${text}`); } return { tool: match[1].trim(), toolInput: match[2].trim().replace(/^"+|"+$/g, ""), log: text, }; } getFormatInstructions(): string { throw new Error("Not implemented"); }}export const run = async () => { const model = new ChatOpenAI({ temperature: 0 }); const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(), ]; const llmChain = new LLMChain({ prompt: new CustomPromptTemplate({ tools, inputVariables: ["input", "agent_scratchpad"], }), llm: model, }); const agent = new LLMSingleActionAgent({ llmChain, outputParser: new CustomOutputParser(), stop: ["\nObservation"], }); const executor = new AgentExecutor({ agent, tools, }); console.log("Loaded agent. "); const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`; console.log(`Executing with input "${input}"...`); const result = await executor.call({ input }); console.log(`Got output ${result.output}`);};run();API Reference:AgentActionOutputParser from langchain/agentsAgentExecutor from langchain/agentsLLMSingleActionAgent from langchain/agentsLLMChain from langchain/chainsChatOpenAI from langchain/chat_models/openaiBaseChatPromptTemplate from langchain/promptsBasePromptTemplate from langchain/promptsSerializedBasePromptTemplate from langchain/promptsrenderTemplate from langchain/promptsAgentAction from langchain/schemaAgentFinish from langchain/schemaAgentStep from langchain/schemaBaseMessage from langchain/schemaHumanMessage from langchain/schemaInputValues from langchain/schemaPartialValues from langchain/schemaSerpAPI from langchain/toolsTool from langchain/toolsCalculator from langchain/tools/calculator Copyright © 2023 LangChain, Inc.
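The executor loop described in both sections can be sketched end to end. In the sketch below, `fakeLlm` is a hypothetical scripted model and `tools` a toy tool table — neither is a LangChain API — but the parse → tool call → observation cycle mirrors what AgentExecutor does with the custom parser:

```typescript
// Toy sketch of the AgentExecutor loop: call the model, parse its output,
// run the chosen tool, append the observation, repeat until a final answer.
type Step = { action: string; input: string; observation: string };

const tools: Record<string, (input: string) => string> = {
  search: () => "Harry Styles, 29 years old",
};

// Scripted model: first requests a search, then finishes once an
// observation is in its scratchpad.
function fakeLlm(scratchpad: Step[]): string {
  if (scratchpad.length === 0) {
    return 'Action: search\nAction Input: "Olivia Wilde boyfriend age"';
  }
  return "Final Answer: 29^0.23 is about 2.17";
}

function runAgent(): string {
  const scratchpad: Step[] = [];
  for (let i = 0; i < 5; i++) { // cap iterations, like maxIterations
    const text = fakeLlm(scratchpad);
    if (text.includes("Final Answer:")) {
      // AgentFinish: return the final message to the user.
      return text.split("Final Answer:")[1].trim();
    }
    // AgentAction: run the named tool and record the observation.
    const match = /Action: (.*)\nAction Input: (.*)/s.exec(text);
    if (!match) throw new Error(`Could not parse LLM output: ${text}`);
    const action = match[1].trim();
    const input = match[2].trim().replace(/^"+|"+$/g, "");
    const observation = tools[action](input);
    scratchpad.push({ action, input, observation });
  }
  throw new Error("Agent stopped after max iterations");
}

console.log(runAgent()); // → 29^0.23 is about 2.17
```

The scratchpad accumulated here plays the same role as the `agent_scratchpad` variable the custom prompt templates build with `reduce` over the intermediate steps.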
4e9727215e95-3155
Get startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsMemoryAgentsAgent typesHow-toSubscribing to eventsCancelling requestsCustom LLM AgentCustom LLM Agent (with a ChatModel)Logging and tracingAdding a timeoutToolsToolkitsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesAgentsHow-toCustom LLM Agent (with a ChatModel)Custom LLM Agent (with a ChatModel)This notebook goes through how to create your own custom agent based on a chat model.An LLM chat agent consists of three parts:PromptTemplate: This is the prompt template that can be used to instruct the language model on what to doChatModel: This is the language model that powers the agentstop sequence: Instructs the LLM to stop generating as soon as this string is foundOutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish objectThe LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:Passes user input and any previous steps to the Agent (in this case, the LLMAgent)If the Agent returns an AgentFinish, then return that directly to the userIf the Agent returns an AgentAction, then use that to call a tool and get an ObservationRepeat, passing the AgentAction and Observation back to the Agent until an AgentFinish is emitted.AgentAction is a response that consists of action and action_input. action refers to which tool to use, and action_input refers to the input to that tool.
4e9727215e95-3156
log can also be provided as more context (that can be used for logging, tracing, etc).AgentFinish is a response that contains the final message to be sent back to the user. This should be used to end an agent run.import { AgentActionOutputParser, AgentExecutor, LLMSingleActionAgent,} from "langchain/agents";import { LLMChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";import { BaseChatPromptTemplate, BasePromptTemplate, SerializedBasePromptTemplate, renderTemplate,} from "langchain/prompts";import { AgentAction, AgentFinish, AgentStep, BaseMessage, HumanMessage, InputValues, PartialValues,} from "langchain/schema";import { SerpAPI, Tool } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const PREFIX = `Answer the following questions as best you can.
4e9727215e95-3157
You have access to the following tools:`;const formatInstructions = ( toolNames: string) => `Use the following format in your response:Question: the input question you must answerThought: you should always think about what to doAction: the action to take, should be one of [${toolNames}]Action Input: the input to the actionObservation: the result of the action... (this Thought/Action/Action Input/Observation can repeat N times)Thought: I now know the final answerFinal Answer: the final answer to the original input question`;const SUFFIX = `Begin!Question: {input}Thought:{agent_scratchpad}`;class CustomPromptTemplate extends BaseChatPromptTemplate { tools: Tool[]; constructor(args: { tools: Tool[]; inputVariables: string[] }) { super({ inputVariables: args.inputVariables }); this.tools = args.tools; } _getPromptType(): string { throw new
4e9727215e95-3158
Error("Not implemented"); } async formatMessages(values: InputValues): Promise<BaseMessage[]> { /** Construct the final template */ const toolStrings = this.tools .map((tool) => `${tool.name}: ${tool.description}`) .join("\n"); const toolNames = this.tools.map((tool) => tool.name).join("\n"); const instructions = formatInstructions(toolNames); const template = [PREFIX, toolStrings, instructions, SUFFIX].join("\n\n"); /** Construct the agent_scratchpad */ const intermediateSteps = values.intermediate_steps as AgentStep[]; const agentScratchpad = intermediateSteps.reduce( (thoughts, { action, observation }) => thoughts + [action.log, `\nObservation: ${observation}`, "Thought:"].join("\n"), "" ); const newInput = { agent_scratchpad: agentScratchpad, ...values }; /** Format the template. / const formatted = renderTemplate(template, "f-string", newInput); return [new HumanMessage(formatted)]; } partial(_values: PartialValues): Promise<BaseChatPromptTemplate> { throw new Error("Not implemented"); } serialize(): SerializedBasePromptTemplate { throw new Error("Not implemented"); }}class CustomOutputParser extends AgentActionOutputParser { lc_namespace = ["langchain", "agents", "custom_llm_agent_chat"]; async parse(text: string): Promise<AgentAction | AgentFinish> { if (text.includes("Final Answer:")) { const parts = text.split("Final Answer:"); const input = parts[parts.length - 1].trim(); const finalAnswers = { output: input }; return { log: text, returnValues: finalAnswers }; } const match = /Action: (. *)\nAction Input: (.
4e9727215e95-3159
)/s.exec(text); if (!match) { throw new Error(`Could not parse LLM output: ${text}`); } return { tool: match[1].trim(), toolInput: match[2].trim().replace(/^"+|"+$/g, ""), log: text, }; } getFormatInstructions(): string { throw new Error("Not implemented"); }}export const run = async () => { const model = new ChatOpenAI({ temperature: 0 }); const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(), ]; const llmChain = new LLMChain({ prompt: new CustomPromptTemplate({ tools, inputVariables: ["input", "agent_scratchpad"], }), llm: model, }); const agent = new LLMSingleActionAgent({ llmChain, outputParser: new CustomOutputParser(), stop: ["\nObservation"], }); const executor = new AgentExecutor({ agent, tools, }); console.log("Loaded agent. "); const input = `Who is Olivia Wilde's boyfriend?
4e9727215e95-3160
What is his current age raised to the 0.23 power?`; console.log(`Executing with input "${input}"...`); const result = await executor.call({ input }); console.log(`Got output ${result.output}`);};run();API Reference:AgentActionOutputParser from langchain/agentsAgentExecutor from langchain/agentsLLMSingleActionAgent from langchain/agentsLLMChain from langchain/chainsChatOpenAI from langchain/chat_models/openaiBaseChatPromptTemplate from langchain/promptsBasePromptTemplate from langchain/promptsSerializedBasePromptTemplate from langchain/promptsrenderTemplate from langchain/promptsAgentAction from langchain/schemaAgentFinish from langchain/schemaAgentStep from langchain/schemaBaseMessage from langchain/schemaHumanMessage from langchain/schemaInputValues from langchain/schemaPartialValues from langchain/schemaSerpAPI from langchain/toolsTool from langchain/toolsCalculator from langchain/tools/calculatorPreviousCustom LLM AgentNextLogging and tracing
Logging and tracing

You can pass the verbose flag when creating an agent to enable logging of all events to the console. You can also enable tracing by setting the LANGCHAIN_TRACING environment variable to true. For example:

import { initializeAgentExecutorWithOptions } from "langchain/agents";import { OpenAI } from "langchain/llms/openai";import { SerpAPI } from "langchain/tools";import { Calculator } from "langchain/tools/calculator";const model = new OpenAI({ temperature: 0 });const tools = [ new SerpAPI(process.env.SERPAPI_API_KEY, { location: "Austin,Texas,United States", hl: "en", gl: "us", }), new Calculator(),];const executor = await initializeAgentExecutorWithOptions(tools, model, { agentType: "zero-shot-react-description", verbose: true,});const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`;const result = await executor.call({ input });

API Reference:
initializeAgentExecutorWithOptions from langchain/agents
OpenAI from langchain/llms/openai
SerpAPI from langchain/tools
Calculator from langchain/tools/calculator

[chain/start] [1:chain:agent_executor] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? "}[chain/start] [1:chain:agent_executor > 2:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? ", "agent_scratchpad": "", "stop": [ "\nObservation: " ]}[llm/start] [1:chain:agent_executor > 2:chain:llm_chain > 3:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?\nThought:" ]}[llm/end] [1:chain:agent_executor > 2:chain:llm_chain > 3:llm:openai] [3.52s] Exiting LLM run with output: { "generations": [ [ { "text": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 39, "promptTokens": 220, "totalTokens": 259 } }}[chain/end] [1:chain:agent_executor > 2:chain:llm_chain] [3.53s] Exiting Chain run with output: { "text": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\""}[agent/action] [1:chain:agent_executor] Agent selected action: { "tool": "search", "toolInput": "Olivia Wilde boyfriend", "log": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction
and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\""}[tool/start] [1:chain:agent_executor > 4:tool:search] Entering Tool run with input: "Olivia Wilde boyfriend"[tool/end] [1:chain:agent_executor > 4:tool:search] [845ms] Exiting Tool run with output: "In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022. "[chain/start] [1:chain:agent_executor > 5:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? ", "agent_scratchpad": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Their relationship ended in November 2022.\nThought:", "stop": [ "\nObservation: " ]}[llm/start] [1:chain:agent_executor > 5:chain:llm_chain > 6:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\nThought: I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022.\nThought:" ]}[llm/end] [1:chain:agent_executor > 5:chain:llm_chain > 6:llm:openai] [3.65s] Exiting LLM run with output: { "generations": [ [ { "text": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 23, "promptTokens": 296, "totalTokens": 319 } }}[chain/end] [1:chain:agent_executor > 5:chain:llm_chain] [3.65s] Exiting Chain run with output: { "text": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\""}[agent/action] [1:chain:agent_executor] Agent selected action: { "tool": "search", "toolInput": "Harry Styles age", "log": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\""}[tool/start] [1:chain:agent_executor > 7:tool:search] Entering Tool run with input: "Harry Styles age"[tool/end] [1:chain:agent_executor >
run with input: "Harry Styles age"[tool/end] [1:chain:agent_executor > 7:tool:search] [632ms] Exiting Tool run with output: "29 years"[chain/start] [1:chain:agent_executor > 8:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?
", "agent_scratchpad": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Their relationship ended in November 2022.\nThought: I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought:", "stop": [ "\nObservation: " ]}[llm/start] [1:chain:agent_executor > 8:chain:llm_chain > 9:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\nThought: I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022.\nThought: I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought:" ]}[llm/end] [1:chain:agent_executor > 8:chain:llm_chain > 9:llm:openai] [2.72s] Exiting LLM run with output: { "generations": [ [ { "text": " I need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction Input: 29^0.23", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 26, "promptTokens": 329, "totalTokens": 355 } }}[chain/end] [1:chain:agent_executor > 8:chain:llm_chain] [2.72s] Exiting Chain run with output: { "text": " I need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction Input: 29^0.23"}[agent/action] [1:chain:agent_executor] Agent selected action: { "tool": "calculator", "toolInput": "29^0.23", "log": " I need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction
need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction Input: 29^0.23"}[tool/start] [1:chain:agent_executor > 10:tool:calculator] Entering Tool run with input: "29^0.23"[tool/end] [1:chain:agent_executor > 10:tool:calculator] [3ms] Exiting Tool run with output: "2.169459462491557"[chain/start] [1:chain:agent_executor > 11:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?
", "agent_scratchpad": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Their relationship ended in November 2022.\nThought: I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought: I need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction Input: 29^0.23\nObservation: 2.169459462491557\nThought:", "stop": [ "\nObservation: " ]}[llm/start] [1:chain:agent_executor > 11:chain:llm_chain > 12:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\nThought: I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022.\nThought: I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"\nObservation: 29 years\nThought: I need to calculate 29 raised to the 0.23 power.\nAction: calculator\nAction Input: 29^0.23\nObservation: 2.169459462491557\nThought:" ]}[llm/end] [1:chain:agent_executor > 11:chain:llm_chain > 12:llm:openai] [3.51s] Exiting LLM run with output: { "generations": [ [ { "text": " I now know the final answer.\nFinal Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557. ", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 39, "promptTokens": 371, "totalTokens": 410 } }}[chain/end] [1:chain:agent_executor > 11:chain:llm_chain] [3.51s] Exiting Chain run with output: { "text": " I now know the final answer.\nFinal Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is
2.169459462491557. "}

[chain/end] [1:chain:agent_executor] [14.90s] Exiting Chain run with output: { "output": "Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557. "}
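As a quick sanity check on the calculator tool's observation in the trace above, the exponentiation can be reproduced directly in plain Node.js, with no LangChain dependency:

```typescript
// Reproduce the calculator tool's step: 29 raised to the 0.23 power.
const age = 29;
const exponent = 0.23;
const result = Math.pow(age, exponent);

console.log(result); // ≈ 2.169459462491557, matching the agent's final answer
```

This is useful when debugging a trace: if a tool's observation looks off, recomputing it outside the agent confirms whether the tool or the model's input to it was at fault.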
Logging and tracing

You can pass the verbose flag when creating an agent to enable logging of all events to the console. You can also enable tracing by setting the LANGCHAIN_TRACING environment variable to true. For example:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const model = new OpenAI({ temperature: 0 });

const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
  verbose: true,
});

const input = `Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?`;

const result = await executor.call({ input });

API Reference: initializeAgentExecutorWithOptions from langchain/agents; OpenAI from langchain/llms/openai; SerpAPI from langchain/tools; Calculator from langchain/tools/calculator

[chain/start] [1:chain:agent_executor] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? "}

[chain/start] [1:chain:agent_executor > 2:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? ", "agent_scratchpad": "", "stop": [ "\nObservation: " ]}

[llm/start] [1:chain:agent_executor > 2:chain:llm_chain > 3:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend?
What is his current age raised to the 0.23 power?\nThought:" ]}

[llm/end] [1:chain:agent_executor > 2:chain:llm_chain > 3:llm:openai] [3.52s] Exiting LLM run with output: { "generations": [ [ { "text": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 39, "promptTokens": 220, "totalTokens": 259 } }}

[chain/end] [1:chain:agent_executor > 2:chain:llm_chain] [3.53s] Exiting Chain run with output: { "text": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\""}

[agent/action] [1:chain:agent_executor] Agent selected action: { "tool": "search", "toolInput": "Olivia Wilde boyfriend", "log": " I need to find out who Olivia Wilde's boyfriend is
and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\""}

[tool/start] [1:chain:agent_executor > 4:tool:search] Entering Tool run with input: "Olivia Wilde boyfriend"

[tool/end] [1:chain:agent_executor > 4:tool:search] [845ms] Exiting Tool run with output: "In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022. "

[chain/start] [1:chain:agent_executor > 5:chain:llm_chain] Entering Chain run with input: { "input": "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? ", "agent_scratchpad": " I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Their relationship ended in November 2022.\nThought:", "stop": [ "\nObservation: " ]}

[llm/start] [1:chain:agent_executor > 5:chain:llm_chain > 6:llm:openai] Entering LLM run with input: { "prompts": [ "Answer the following questions as best you can. You have access to the following tools:\n\nsearch: a search engine. useful for when you need to answer questions about current events. input should be a search query.\ncalculator: Useful for getting the result of a math expression.
The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [search,calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\nThought: I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\nAction: search\nAction Input: \"Olivia Wilde boyfriend\"\nObservation: In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Their relationship ended in November 2022.\nThought:" ]}

[llm/end] [1:chain:agent_executor > 5:chain:llm_chain > 6:llm:openai] [3.65s] Exiting LLM run with output: { "generations": [ [ { "text": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\"", "generationInfo": { "finishReason": "stop", "logprobs": null } } ] ], "llmOutput": { "tokenUsage": { "completionTokens": 23, "promptTokens": 296, "totalTokens": 319 } }}

[chain/end] [1:chain:agent_executor > 5:chain:llm_chain] [3.65s] Exiting Chain run with output: { "text": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\""}

[agent/action] [1:chain:agent_executor] Agent selected action: { "tool": "search", "toolInput": "Harry Styles age", "log": " I need to find out Harry Styles' age.\nAction: search\nAction Input: \"Harry Styles age\""}

[tool/start] [1:chain:agent_executor > 7:tool:search] Entering Tool run with input: "Harry Styles age"

[tool/end] [1:chain:agent_executor >