Unnamed: 0
int64
0
4.66k
page content
stringlengths
23
2k
description
stringlengths
8
925
output
stringlengths
38
2.93k
4,300
' Cost'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '-'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'Push'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' In'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'flation'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' Theory'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ':'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' This'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' theory'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' suggests'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' that high unemployment'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' leads'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' to'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' higher'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' wages'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ','}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' which'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' in'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' turn'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' leads'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' to'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' higher'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' prices'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' and higher inflation'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '.'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''})
Head to Integrations for documentation on built-in integrations with LLM providers.
Head to Integrations for documentation on built-in integrations with LLM providers. ->: ' Cost'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '-'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'Push'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' In'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': 'flation'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' Theory'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ':'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' This'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' theory'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' suggests'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' that high unemployment'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' leads'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' to'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' higher'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' wages'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ','}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' which'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' in'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' turn'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' leads'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' to'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' higher'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' prices'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ' and higher inflation'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': '.'}) RunLogPatch({'op': 'add', 'path': '/streamed_output/-', 'value': ''})
4,301
'path': '/streamed_output/-', 'value': ''}) RunLogPatch({'op': 'replace', 'path': '/final_output', 'value': {'generations': [[{'generation_info': {'finish_reason': 'stop', 'logprobs': None}, 'text': '\n' '\n' '1. The Phillips Curve: This theory ' 'suggests that there is an inverse ' 'relationship between unemployment and ' 'inflation. When unemployment is low, ' 'inflation tends to be high, and when ' 'unemployment is high, inflation tends ' 'to be low. \n' '\n' '2. The NAIRU Theory: This theory ' 'suggests that there is a natural rate ' 'of unemployment, also known as the ' 'Non-Accelerating Inflation Rate of ' 'Unemployment (NAIRU). According to this ' 'theory, when unemployment is below the ' 'NAIRU, then inflation will increase, ' 'and when unemployment is above the ' 'NAIRU, then inflation will decrease.\n' '\n' '3. The Cost-Push Inflation Theory: This ' 'theory suggests that high unemployment ' 'leads to higher wages, which in turn '
Head to Integrations for documentation on built-in integrations with LLM providers.
Head to Integrations for documentation on built-in integrations with LLM providers. ->: 'path': '/streamed_output/-', 'value': ''}) RunLogPatch({'op': 'replace', 'path': '/final_output', 'value': {'generations': [[{'generation_info': {'finish_reason': 'stop', 'logprobs': None}, 'text': '\n' '\n' '1. The Phillips Curve: This theory ' 'suggests that there is an inverse ' 'relationship between unemployment and ' 'inflation. When unemployment is low, ' 'inflation tends to be high, and when ' 'unemployment is high, inflation tends ' 'to be low. \n' '\n' '2. The NAIRU Theory: This theory ' 'suggests that there is a natural rate ' 'of unemployment, also known as the ' 'Non-Accelerating Inflation Rate of ' 'Unemployment (NAIRU). According to this ' 'theory, when unemployment is below the ' 'NAIRU, then inflation will increase, ' 'and when unemployment is above the ' 'NAIRU, then inflation will decrease.\n' '\n' '3. The Cost-Push Inflation Theory: This ' 'theory suggests that high unemployment ' 'leads to higher wages, which in turn '
4,302
turn ' 'leads to higher prices and higher ' 'inflation.'}]], 'llm_output': None, 'run': None}})__call__: string in -> string out​The simplest way to use an LLM is a callable: pass in a string, get a string completion.llm("Tell me a joke") '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!'generate: batch calls, richer outputs​generate lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)len(llm_result.generations) 30llm_result.generations[0] [Generation(text="\n\nQ: Why don't scientists trust atoms?\nA: Because they make up everything!", generation_info={'finish_reason': 'stop', 'logprobs': None})]llm_result.generations[-1] [Generation(text='\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you!', generation_info={'finish_reason': 'stop', 'logprobs': None})]You can also access provider specific information that is returned. This information is not standardized across providers.llm_result.llm_output {'token_usage': {'completion_tokens': 900, 'total_tokens': 1020, 'prompt_tokens': 120}, 'model_name': 'text-davinci-003'}PreviousLanguage modelsNextAsync APIGet startedSetupLCEL__call__: string in -> string outgenerate: batch calls, richer outputsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Head to Integrations for documentation on built-in integrations with LLM providers.
Head to Integrations for documentation on built-in integrations with LLM providers. ->: turn ' 'leads to higher prices and higher ' 'inflation.'}]], 'llm_output': None, 'run': None}})__call__: string in -> string out​The simplest way to use an LLM is a callable: pass in a string, get a string completion.llm("Tell me a joke") '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!'generate: batch calls, richer outputs​generate lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)len(llm_result.generations) 30llm_result.generations[0] [Generation(text="\n\nQ: Why don't scientists trust atoms?\nA: Because they make up everything!", generation_info={'finish_reason': 'stop', 'logprobs': None})]llm_result.generations[-1] [Generation(text='\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you!', generation_info={'finish_reason': 'stop', 'logprobs': None})]You can also access provider specific information that is returned. This information is not standardized across providers.llm_result.llm_output {'token_usage': {'completion_tokens': 900, 'total_tokens': 1020, 'prompt_tokens': 120}, 'model_name': 'text-davinci-003'}PreviousLanguage modelsNextAsync APIGet startedSetupLCEL__call__: string in -> string outgenerate: batch calls, richer outputsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,303
Prompt templates | 🦜️🔗 Langchain
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: Prompt templates | 🦜️🔗 Langchain
4,304
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew-shot examples for chat modelsFormat template outputTemplate formatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt pipeliningValidate templateExample selectorsLanguage modelsOutput parsersRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OPromptsPrompt templatesOn this pagePrompt templatesPrompt templates are pre-defined recipes for generating prompts for language models.A template may include instructions, few-shot examples, and specific context and questions appropriate for a given task.LangChain provides tooling to create and work with prompt templates.LangChain strives to create model agnostic templates to make it easy to reuse existing templates across different language models.Typically, language models expect the prompt to either be a string or else a list of chat messages.PromptTemplate​Use PromptTemplate to create a template for a string prompt.By default, PromptTemplate uses Python's str.format syntax for templating.from langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template( "Tell me a {adjective} joke about {content}.")prompt_template.format(adjective="funny", content="chickens") 'Tell me a funny joke about chickens.'The template supports any number of variables, including no variables:nfrom langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template("Tell me a joke")prompt_template.format() 'Tell me a joke'For additional validation, specify input_variables explicitly. These variables
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsPrompt templatesConnecting to a Feature StoreCustom prompt templateFew-shot prompt templatesFew-shot examples for chat modelsFormat template outputTemplate formatsTypes of MessagePromptTemplatePartial prompt templatesCompositionSerializationPrompt pipeliningValidate templateExample selectorsLanguage modelsOutput parsersRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OPromptsPrompt templatesOn this pagePrompt templatesPrompt templates are pre-defined recipes for generating prompts for language models.A template may include instructions, few-shot examples, and specific context and questions appropriate for a given task.LangChain provides tooling to create and work with prompt templates.LangChain strives to create model agnostic templates to make it easy to reuse existing templates across different language models.Typically, language models expect the prompt to either be a string or else a list of chat messages.PromptTemplate​Use PromptTemplate to create a template for a string prompt.By default, PromptTemplate uses Python's str.format syntax for templating.from langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template( "Tell me a {adjective} joke about {content}.")prompt_template.format(adjective="funny", content="chickens") 'Tell me a funny joke about chickens.'The template supports any number of variables, including no variables:nfrom langchain.prompts import PromptTemplateprompt_template = PromptTemplate.from_template("Tell me a joke")prompt_template.format() 'Tell me a joke'For additional validation, specify input_variables explicitly. These variables
4,305
will be compared against the variables present in the template string during instantiation, raising an exception if there is a mismatch. For example:from langchain.prompts import PromptTemplateinvalid_prompt = PromptTemplate( input_variables=["adjective"], template="Tell me a {adjective} joke about {content}.") --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[19], line 3 1 from langchain.prompts import PromptTemplate ----> 3 invalid_prompt = PromptTemplate( 4 input_variables=["adjective"], 5 template="Tell me a {adjective} joke about {content}." 6 ) File ~/langchain/libs/langchain/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs) 96 def __init__(self, **kwargs: Any) -> None: ---> 97 super().__init__(**kwargs) 98 self._lc_kwargs = kwargs File ~/langchain/.venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for PromptTemplate __root__ Invalid prompt schema; check for mismatched or missing input parameters. 'content' (type=value_error)You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.ChatPromptTemplate‚ÄãThe prompt to chat models is a list of chat messages.Each chat message is associated with content, and an additional parameter called role.
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: will be compared against the variables present in the template string during instantiation, raising an exception if there is a mismatch. For example:from langchain.prompts import PromptTemplateinvalid_prompt = PromptTemplate( input_variables=["adjective"], template="Tell me a {adjective} joke about {content}.") --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[19], line 3 1 from langchain.prompts import PromptTemplate ----> 3 invalid_prompt = PromptTemplate( 4 input_variables=["adjective"], 5 template="Tell me a {adjective} joke about {content}." 6 ) File ~/langchain/libs/langchain/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs) 96 def __init__(self, **kwargs: Any) -> None: ---> 97 super().__init__(**kwargs) 98 self._lc_kwargs = kwargs File ~/langchain/.venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for PromptTemplate __root__ Invalid prompt schema; check for mismatched or missing input parameters. 'content' (type=value_error)You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.ChatPromptTemplate‚ÄãThe prompt to chat models is a list of chat messages.Each chat message is associated with content, and an additional parameter called role.
4,306
For example, in the OpenAI Chat Completions API, a chat message can be associated with an AI assistant, a human or a system role.Create a chat prompt template like this:from langchain.prompts import ChatPromptTemplatechat_template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful AI bot. Your name is {name}."), ("human", "Hello, how are you doing?"), ("ai", "I'm doing well, thanks!"), ("human", "{user_input}"),])messages = chat_template.format_messages( name="Bob", user_input="What is your name?")ChatPromptTemplate.from_messages accepts a variety of message representations.For example, in addition to using the 2-tuple representation of (type, content) used
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: For example, in the OpenAI Chat Completions API, a chat message can be associated with an AI assistant, a human or a system role.Create a chat prompt template like this:from langchain.prompts import ChatPromptTemplatechat_template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful AI bot. Your name is {name}."), ("human", "Hello, how are you doing?"), ("ai", "I'm doing well, thanks!"), ("human", "{user_input}"),])messages = chat_template.format_messages( name="Bob", user_input="What is your name?")ChatPromptTemplate.from_messages accepts a variety of message representations.For example, in addition to using the 2-tuple representation of (type, content) used
4,307
above, you could pass in an instance of MessagePromptTemplate or BaseMessage.from langchain.chat_models import ChatOpenAIfrom langchain.prompts import HumanMessagePromptTemplatefrom langchain.schema.messages import SystemMessagechat_template = ChatPromptTemplate.from_messages( [ SystemMessage( content=( "You are a helpful assistant that re-writes the user's text to " "sound more upbeat." ) ), HumanMessagePromptTemplate.from_template("{text}"), ])llm = ChatOpenAI()llm(chat_template.format_messages(text='i dont like eating tasty things.')) AIMessage(content='I absolutely love indulging in delicious treats!')This provides you with a lot of flexibility in how you construct your chat prompts.LCEL‚ÄãPromptTemplate and ChatPromptTemplate implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.PromptTemplate accepts a dictionary (of the prompt variables) and returns a StringPromptValue. A ChatPromptTemplate accepts a dictionary and returns a ChatPromptValue.prompt_val = prompt_template.invoke({"adjective": "funny", "content": "chickens"})prompt_val StringPromptValue(text='Tell me a joke')prompt_val.to_string() 'Tell me a joke'prompt_val.to_messages() [HumanMessage(content='Tell me a joke')]chat_val = chat_template.invoke({"text": 'i dont like eating tasty things.'})chat_val.to_messages() [SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."), HumanMessage(content='i dont like eating tasty things.')]chat_val.to_string() "System: You are a helpful assistant that re-writes the user's text to sound more upbeat.\nHuman: i dont like eating tasty things."PreviousPromptsNextConnecting to a Feature
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: above, you could pass in an instance of MessagePromptTemplate or BaseMessage.from langchain.chat_models import ChatOpenAIfrom langchain.prompts import HumanMessagePromptTemplatefrom langchain.schema.messages import SystemMessagechat_template = ChatPromptTemplate.from_messages( [ SystemMessage( content=( "You are a helpful assistant that re-writes the user's text to " "sound more upbeat." ) ), HumanMessagePromptTemplate.from_template("{text}"), ])llm = ChatOpenAI()llm(chat_template.format_messages(text='i dont like eating tasty things.')) AIMessage(content='I absolutely love indulging in delicious treats!')This provides you with a lot of flexibility in how you construct your chat prompts.LCEL‚ÄãPromptTemplate and ChatPromptTemplate implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.PromptTemplate accepts a dictionary (of the prompt variables) and returns a StringPromptValue. A ChatPromptTemplate accepts a dictionary and returns a ChatPromptValue.prompt_val = prompt_template.invoke({"adjective": "funny", "content": "chickens"})prompt_val StringPromptValue(text='Tell me a joke')prompt_val.to_string() 'Tell me a joke'prompt_val.to_messages() [HumanMessage(content='Tell me a joke')]chat_val = chat_template.invoke({"text": 'i dont like eating tasty things.'})chat_val.to_messages() [SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."), HumanMessage(content='i dont like eating tasty things.')]chat_val.to_string() "System: You are a helpful assistant that re-writes the user's text to sound more upbeat.\nHuman: i dont like eating tasty things."PreviousPromptsNextConnecting to a Feature
4,308
to a Feature StorePromptTemplateChatPromptTemplateLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Prompt templates are pre-defined recipes for generating prompts for language models.
Prompt templates are pre-defined recipes for generating prompts for language models. ->: to a Feature StorePromptTemplateChatPromptTemplateLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,309
Retrieval-augmented generation (RAG) | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: Retrieval-augmented generation (RAG) | 🦜️🔗 Langchain
4,310
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)On this pageRetrieval-augmented generation (RAG)Use case​Suppose you have some text documents (PDF, blog, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this.In this walkthrough we'll go over how to build a question-answering over documents application using LLMs. Two very related use cases which we cover elsewhere are:QA over structured data (e.g., SQL)QA over code (e.g., Python)Overview​The pipeline for converting raw unstructured data into a QA chain looks like this:Loading: First we need to load our data. Use the LangChain integration hub to browse the full set of loaders. Splitting: Text splitters break Documents into splits of specified sizeStorage: Storage (e.g., often a vectorstore) will house and often embed the splitsRetrieval: The app retrieves splits from storage (e.g., often with similar embeddings to the input question)Generation: An LLM produces an answer using a prompt that includes the question and the retrieved dataQuickstart​Suppose we want a QA app over this blog post. We can create this in a few lines of code. First set environment variables and install packages:pip install langchain openai chromadb langchainhub# Set env var OPENAI_API_KEY or load from a .env file# import dotenv#
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)On this pageRetrieval-augmented generation (RAG)Use case​Suppose you have some text documents (PDF, blog, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this.In this walkthrough we'll go over how to build a question-answering over documents application using LLMs. Two very related use cases which we cover elsewhere are:QA over structured data (e.g., SQL)QA over code (e.g., Python)Overview​The pipeline for converting raw unstructured data into a QA chain looks like this:Loading: First we need to load our data. Use the LangChain integration hub to browse the full set of loaders. Splitting: Text splitters break Documents into splits of specified sizeStorage: Storage (e.g., often a vectorstore) will house and often embed the splitsRetrieval: The app retrieves splits from storage (e.g., often with similar embeddings to the input question)Generation: An LLM produces an answer using a prompt that includes the question and the retrieved dataQuickstart​Suppose we want a QA app over this blog post. We can create this in a few lines of code. First set environment variables and install packages:pip install langchain openai chromadb langchainhub# Set env var OPENAI_API_KEY or load from a .env file# import dotenv#
4,311
or load from a .env file# import dotenv# dotenv.load_dotenv()# Load documentsfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")# Split documentsfrom langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)splits = text_splitter.split_documents(loader.load())# Embed and store splitsfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsvectorstore = Chroma.from_documents(documents=splits,embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()# Prompt # https://smith.langchain.com/hub/rlm/rag-promptfrom langchain import hubrag_prompt = hub.pull("rlm/rag-prompt")# LLMfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)# RAG chain from langchain.schema.runnable import RunnablePassthroughrag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')Here is the LangSmith trace for this chain.Below we will explain each step in more detail.Step 1. Load‚ÄãSpecify a DocumentLoader to load in your unstructured data as Documents. A Document is a dict with text (page_content) and metadata.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Go deeper‚ÄãBrowse the > 160 data loader integrations here.See further documentation on loaders here.Step 2. Split‚ÄãSplit the Document into chunks for embedding and vector storage.from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size =
Open In Colab
Open In Colab ->: or load from a .env file# import dotenv# dotenv.load_dotenv()# Load documentsfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")# Split documentsfrom langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)splits = text_splitter.split_documents(loader.load())# Embed and store splitsfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsvectorstore = Chroma.from_documents(documents=splits,embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()# Prompt # https://smith.langchain.com/hub/rlm/rag-promptfrom langchain import hubrag_prompt = hub.pull("rlm/rag-prompt")# LLMfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)# RAG chain from langchain.schema.runnable import RunnablePassthroughrag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')Here is the LangSmith trace for this chain.Below we will explain each step in more detail.Step 1. Load‚ÄãSpecify a DocumentLoader to load in your unstructured data as Documents. A Document is a dict with text (page_content) and metadata.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Go deeper‚ÄãBrowse the > 160 data loader integrations here.See further documentation on loaders here.Step 2. Split‚ÄãSplit the Document into chunks for embedding and vector storage.from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size =
4,312
= RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)all_splits = text_splitter.split_documents(data)Go deeper‚ÄãDocumentSplitters are just one type of the more generic DocumentTransformers.See further documentation on transformers here.Context-aware splitters keep the location ("context") of each split in the original Document:Markdown filesCode (py or js)DocumentsStep 3. Store‚ÄãTo be able to look up our document splits, we first need to store them where we can later look them up.The most common way to do this is to embed the contents of each document split.We store the embedding and splits in a vectorstore.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Go deeper‚ÄãBrowse the > 40 vectorstores integrations here.See further documentation on vectorstores here.Browse the > 30 text embedding integrations here.See further documentation on embedding models here.Here are Steps 1-3:Step 4. Retrieve‚ÄãRetrieve relevant splits for any question using similarity search.This is simply "top K" retrieval where we select documents based on embedding similarity to the query.question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs) 4Go deeper‚ÄãVectorstores are commonly used for retrieval, but they are not the only option. For example, SVMs (see thread here) can also be used.LangChain has many retrievers including, but not limited to, vectorstores. All retrievers implement a common method get_relevant_documents() (and its asynchronous variant aget_relevant_documents()).from langchain.retrievers import SVMRetrieversvm_retriever = SVMRetriever.from_documents(all_splits,OpenAIEmbeddings())docs_svm=svm_retriever.get_relevant_documents(question)len(docs_svm) 4Some common ways to improve on vector similarity search include:MultiQueryRetriever generates variants of the input
Open In Colab
Open In Colab ->: = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 0)all_splits = text_splitter.split_documents(data)Go deeper‚ÄãDocumentSplitters are just one type of the more generic DocumentTransformers.See further documentation on transformers here.Context-aware splitters keep the location ("context") of each split in the original Document:Markdown filesCode (py or js)DocumentsStep 3. Store‚ÄãTo be able to look up our document splits, we first need to store them where we can later look them up.The most common way to do this is to embed the contents of each document split.We store the embedding and splits in a vectorstore.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Go deeper‚ÄãBrowse the > 40 vectorstores integrations here.See further documentation on vectorstores here.Browse the > 30 text embedding integrations here.See further documentation on embedding models here.Here are Steps 1-3:Step 4. Retrieve‚ÄãRetrieve relevant splits for any question using similarity search.This is simply "top K" retrieval where we select documents based on embedding similarity to the query.question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs) 4Go deeper‚ÄãVectorstores are commonly used for retrieval, but they are not the only option. For example, SVMs (see thread here) can also be used.LangChain has many retrievers including, but not limited to, vectorstores. All retrievers implement a common method get_relevant_documents() (and its asynchronous variant aget_relevant_documents()).from langchain.retrievers import SVMRetrieversvm_retriever = SVMRetriever.from_documents(all_splits,OpenAIEmbeddings())docs_svm=svm_retriever.get_relevant_documents(question)len(docs_svm) 4Some common ways to improve on vector similarity search include:MultiQueryRetriever generates variants of the input
4,313
generates variants of the input question to improve retrieval.Max marginal relevance selects for relevance and diversity among the retrieved documents.Documents can be filtered during retrieval using metadata filters.import loggingfrom langchain.chat_models import ChatOpenAIfrom langchain.retrievers.multi_query import MultiQueryRetrieverlogging.basicConfig()logging.getLogger('langchain.retrievers.multi_query').setLevel(logging.INFO)retriever_from_llm = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=ChatOpenAI(temperature=0))unique_docs = retriever_from_llm.get_relevant_documents(query=question)len(unique_docs)In addition, a useful concept for improving retrieval is decoupling the documents from the embedded search key.For example, we can embed a document summary or question that are likely to lead to the document being retrieved.See details in here on the multi-vector retriever for this purpose.Step 5. Generate‚ÄãDistill the retrieved documents into an answer using an LLM/Chat model (e.g., gpt-3.5-turbo).We use the Runnable protocol to define the chain.Runnable protocol pipes together components in a transparent way.We used a prompt for RAG that is checked into the LangChain prompt hub (here).from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)from langchain.schema.runnable import RunnablePassthroughrag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')Go deeper‚ÄãChoosing LLMs‚ÄãBrowse the > 90 LLM and chat model integrations here.See further documentation on LLMs and chat models here.See a guide on local LLMS here.Customizing the prompt‚ÄãAs shown
Open In Colab
Open In Colab ->: generates variants of the input question to improve retrieval.Max marginal relevance selects for relevance and diversity among the retrieved documents.Documents can be filtered during retrieval using metadata filters.import loggingfrom langchain.chat_models import ChatOpenAIfrom langchain.retrievers.multi_query import MultiQueryRetrieverlogging.basicConfig()logging.getLogger('langchain.retrievers.multi_query').setLevel(logging.INFO)retriever_from_llm = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=ChatOpenAI(temperature=0))unique_docs = retriever_from_llm.get_relevant_documents(query=question)len(unique_docs)In addition, a useful concept for improving retrieval is decoupling the documents from the embedded search key.For example, we can embed a document summary or question that are likely to lead to the document being retrieved.See details in here on the multi-vector retriever for this purpose.Step 5. Generate‚ÄãDistill the retrieved documents into an answer using an LLM/Chat model (e.g., gpt-3.5-turbo).We use the Runnable protocol to define the chain.Runnable protocol pipes together components in a transparent way.We used a prompt for RAG that is checked into the LangChain prompt hub (here).from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)from langchain.schema.runnable import RunnablePassthroughrag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')Go deeper‚ÄãChoosing LLMs‚ÄãBrowse the > 90 LLM and chat model integrations here.See further documentation on LLMs and chat models here.See a guide on local LLMS here.Customizing the prompt‚ÄãAs shown
4,314
local LLMS here.Customizing the prompt​As shown above, we can load prompts (e.g., this RAG prompt) from the prompt hub.The prompt can also be easily customized, as shown below.from langchain.prompts import PromptTemplatetemplate = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer. {context}Question: {question}Helpful Answer:"""rag_prompt_custom = PromptTemplate.from_template(template)rag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt_custom | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a complicated task into smaller, more manageable subtasks or steps. It can be done using prompts, task-specific instructions, or human inputs. Thanks for asking!')We can use LangSmith to see the trace.PreviousSQLNextRAG over codeUse caseOverviewQuickstartStep 1. LoadGo deeperStep 2. SplitGo deeperStep 3. StoreGo deeperStep 4. RetrieveGo deeperStep 5. GenerateGo deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Open In Colab
Open In Colab ->: local LLMS here.Customizing the prompt​As shown above, we can load prompts (e.g., this RAG prompt) from the prompt hub.The prompt can also be easily customized, as shown below.from langchain.prompts import PromptTemplatetemplate = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer. {context}Question: {question}Helpful Answer:"""rag_prompt_custom = PromptTemplate.from_template(template)rag_chain = ( {"context": retriever, "question": RunnablePassthrough()} | rag_prompt_custom | llm )rag_chain.invoke("What is Task Decomposition?") AIMessage(content='Task decomposition is the process of breaking down a complicated task into smaller, more manageable subtasks or steps. It can be done using prompts, task-specific instructions, or human inputs. Thanks for asking!')We can use LangSmith to see the trace.PreviousSQLNextRAG over codeUse caseOverviewQuickstartStep 1. LoadGo deeperStep 2. SplitGo deeperStep 3. StoreGo deeperStep 4. RetrieveGo deeperStep 5. GenerateGo deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,315
Output parsers | 🦜️🔗 Langchain
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. ->: Output parsers | 🦜️🔗 Langchain
4,316
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted."Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get started​Below we go over the main type of output parser, the PydanticOutputParser.from typing import Listfrom langchain.llms import OpenAIfrom langchain.output_parsers import PydanticOutputParserfrom langchain.prompts import PromptTemplatefrom langchain.pydantic_v1 import BaseModel, Field, validatormodel = OpenAI(model_name='text-davinci-003', temperature=0.0)# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke")
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersOn this pageOutput parsersLanguage models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted."Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.And then one optional one:"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.Get started​Below we go over the main type of output parser, the PydanticOutputParser.from typing import Listfrom langchain.llms import OpenAIfrom langchain.output_parsers import PydanticOutputParserfrom langchain.prompts import PromptTemplatefrom langchain.pydantic_v1 import BaseModel, Field, validatormodel = OpenAI(model_name='text-davinci-003', temperature=0.0)# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke")
4,317
Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()})# And a query intended to prompt a language model to populate the data structure.prompt_and_model = prompt | modeloutput = prompt_and_model.invoke({"query": "Tell me a joke."})parser.invoke(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')LCEL‚ÄãOutput parsers implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.Output parsers accept a string or BaseMessage as input and can return an arbitrary type.parser.invoke(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')Instead of manually invoking the parser, we also could've just added it to our Runnable sequence:chain = prompt | model | parserchain.invoke({"query": "Tell me a joke."}) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.The SimpleJsonOutputParser for example can stream through partial outputs:from
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. ->: Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()})# And a query intended to prompt a language model to populate the data structure.prompt_and_model = prompt | modeloutput = prompt_and_model.invoke({"query": "Tell me a joke."})parser.invoke(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')LCEL‚ÄãOutput parsers implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.Output parsers accept a string or BaseMessage as input and can return an arbitrary type.parser.invoke(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')Instead of manually invoking the parser, we also could've just added it to our Runnable sequence:chain = prompt | model | parserchain.invoke({"query": "Tell me a joke."}) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.The SimpleJsonOutputParser for example can stream through partial outputs:from
4,318
example can stream through partial outputs:from langchain.output_parsers.json import SimpleJsonOutputParserjson_prompt = PromptTemplate.from_template("Return a JSON object with an `answer` key that answers the following question: {question}")json_parser = SimpleJsonOutputParser()json_chain = json_prompt | model | json_parserlist(json_chain.stream({"question": "Who invented the microscope?"})) [{}, {'answer': ''}, {'answer': 'Ant'}, {'answer': 'Anton'}, {'answer': 'Antonie'}, {'answer': 'Antonie van'}, {'answer': 'Antonie van Lee'}, {'answer': 'Antonie van Leeu'}, {'answer': 'Antonie van Leeuwen'}, {'answer': 'Antonie van Leeuwenho'}, {'answer': 'Antonie van Leeuwenhoek'}]While the PydanticOutputParser cannot:list(chain.stream({"query": "Tell me a joke."})) [Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')]PreviousStreamingNextList parserGet startedLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. ->: example can stream through partial outputs:from langchain.output_parsers.json import SimpleJsonOutputParserjson_prompt = PromptTemplate.from_template("Return a JSON object with an `answer` key that answers the following question: {question}")json_parser = SimpleJsonOutputParser()json_chain = json_prompt | model | json_parserlist(json_chain.stream({"question": "Who invented the microscope?"})) [{}, {'answer': ''}, {'answer': 'Ant'}, {'answer': 'Anton'}, {'answer': 'Antonie'}, {'answer': 'Antonie van'}, {'answer': 'Antonie van Lee'}, {'answer': 'Antonie van Leeu'}, {'answer': 'Antonie van Leeuwen'}, {'answer': 'Antonie van Leeuwenho'}, {'answer': 'Antonie van Leeuwenhoek'}]While the PydanticOutputParser cannot:list(chain.stream({"query": "Tell me a joke."})) [Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')]PreviousStreamingNextList parserGet startedLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,319
Pydantic (JSON) parser | 🦜️🔗 Langchain
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema. ->: Pydantic (JSON) parser | 🦜️🔗 Langchain
4,320
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersPydantic (JSON) parserPydantic (JSON) parserThis output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do reliably but Curie's ability already drops off dramatically. Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.from typing import Listfrom langchain.llms import OpenAIfrom langchain.output_parsers import PydanticOutputParserfrom langchain.prompts import PromptTemplatefrom langchain.pydantic_v1 import BaseModel, Field, validatormodel_name = "text-davinci-003"temperature = 0.0model = OpenAI(model_name=model_name, temperature=temperature)# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator("setup") def question_ends_with_question_mark(cls, field): if field[-1] != "?": raise ValueError("Badly formed question!") return field# And a query intented to prompt a language model to populate the data structure.joke_query = "Tell me a joke."# Set up a parser + inject instructions into
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersPydantic (JSON) parserPydantic (JSON) parserThis output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do reliably but Curie's ability already drops off dramatically. Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion.from typing import Listfrom langchain.llms import OpenAIfrom langchain.output_parsers import PydanticOutputParserfrom langchain.prompts import PromptTemplatefrom langchain.pydantic_v1 import BaseModel, Field, validatormodel_name = "text-davinci-003"temperature = 0.0model = OpenAI(model_name=model_name, temperature=temperature)# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator("setup") def question_ends_with_question_mark(cls, field): if field[-1] != "?": raise ValueError("Badly formed question!") return field# And a query intented to prompt a language model to populate the data structure.joke_query = "Tell me a joke."# Set up a parser + inject instructions into
4,321
Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(query=joke_query)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')# Here's another example, but with a compound typed field.class Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in")actor_query = "Generate the filmography for a random actor."parser = PydanticOutputParser(pydantic_object=Actor)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(query=actor_query)output = model(_input.to_string())parser.parse(output) Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])PreviousAuto-fixing parserNextRetry parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema.
This output parser allows users to specify an arbitrary JSON schema and query LLMs for JSON outputs that conform to that schema. ->: Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(query=joke_query)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')# Here's another example, but with a compound typed field.class Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in")actor_query = "Generate the filmography for a random actor."parser = PydanticOutputParser(pydantic_object=Actor)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(query=actor_query)output = model(_input.to_string())parser.parse(output) Actor(name='Tom Hanks', film_names=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Cast Away', 'Toy Story'])PreviousAuto-fixing parserNextRetry parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,322
Retry parser | 🦜️🔗 Langchain
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. ->: Retry parser | 🦜️🔗 Langchain
4,323
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersRetry parserRetry parserWhile in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.from langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import ( PydanticOutputParser, OutputFixingParser, RetryOutputParser,)from pydantic import BaseModel, Field, validatorfrom typing import Listtemplate = """Based on the user question, provide an Action and Action Input for what step should be taken.{format_instructions}Question: {query}Response:"""class Action(BaseModel): action: str = Field(description="action to take") action_input: str = Field(description="input to the action")parser = PydanticOutputParser(pydantic_object=Action)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")bad_response = '{"action": "search"}'If we try to parse this response as is, we will get an error:parser.parse(bad_response)
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersRetry parserRetry parserWhile in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.from langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import ( PydanticOutputParser, OutputFixingParser, RetryOutputParser,)from pydantic import BaseModel, Field, validatorfrom typing import Listtemplate = """Based on the user question, provide an Action and Action Input for what step should be taken.{format_instructions}Question: {query}Response:"""class Action(BaseModel): action: str = Field(description="action to take") action_input: str = Field(description="input to the action")parser = PydanticOutputParser(pydantic_object=Action)prompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")bad_response = '{"action": "search"}'If we try to parse this response as is, we will get an error:parser.parse(bad_response)
4,324
will get an error:parser.parse(bad_response) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text) 23 json_object = json.loads(json_str) ---> 24 return self.pydantic_object.parse_obj(json_object) 26 except (json.JSONDecodeError, ValidationError) as e: File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj() File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for Action action_input field required (type=value_error.missing) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(bad_response) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action action_input field required (type=value_error.missing)If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn't know what to actually put for action input.fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())fix_parser.parse(bad_response) Action(action='search', action_input='')Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.from
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. ->: will get an error:parser.parse(bad_response) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text) 23 json_object = json.loads(json_str) ---> 24 return self.pydantic_object.parse_obj(json_object) 26 except (json.JSONDecodeError, ValidationError) as e: File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj() File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for Action action_input field required (type=value_error.missing) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(bad_response) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action action_input field required (type=value_error.missing)If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn't know what to actually put for action input.fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())fix_parser.parse(bad_response) Action(action='search', action_input='')Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.from
4,325
to try again to get a better response.from langchain.output_parsers import RetryWithErrorOutputParserretry_parser = RetryWithErrorOutputParser.from_llm( parser=parser, llm=OpenAI(temperature=0))retry_parser.parse_with_prompt(bad_response, prompt_value) Action(action='search', action_input='who is leo di caprios gf?')PreviousPydantic (JSON) parserNextStructured output parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. ->: to try again to get a better response.from langchain.output_parsers import RetryWithErrorOutputParserretry_parser = RetryWithErrorOutputParser.from_llm( parser=parser, llm=OpenAI(temperature=0))retry_parser.parse_with_prompt(bad_response, prompt_value) Action(action='search', action_input='who is leo di caprios gf?')PreviousPydantic (JSON) parserNextStructured output parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,326
Auto-fixing parser | 🦜️🔗 Langchain
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors. ->: Auto-fixing parser | 🦜️🔗 Langchain
4,327
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersAuto-fixing parserAuto-fixing parserThis output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.For this example, we'll use the above Pydantic output parser. Here's what happens if we pass it a result that does not comply with the schema:from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import PydanticOutputParserfrom pydantic import BaseModel, Field, validatorfrom typing import Listclass Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in")actor_query = "Generate the filmography for a random actor."parser = PydanticOutputParser(pydantic_object=Actor)misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"parser.parse(misformatted) --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text) 22 json_str =
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersAuto-fixing parserAuto-fixing parserThis output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.For this example, we'll use the above Pydantic output parser. Here's what happens if we pass it a result that does not comply with the schema:from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import PydanticOutputParserfrom pydantic import BaseModel, Field, validatorfrom typing import Listclass Actor(BaseModel): name: str = Field(description="name of an actor") film_names: List[str] = Field(description="list of names of films they starred in")actor_query = "Generate the filmography for a random actor."parser = PydanticOutputParser(pydantic_object=Actor)misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"parser.parse(misformatted) --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text) 22 json_str =
4,328
text) 22 json_str = match.group() ---> 23 json_object = json.loads(json_str) 24 return self.pydantic_object.parse_obj(json_object) File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w) 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)Now we can construct and use a OutputFixingParser. This output
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors. ->: text) 22 json_str = match.group() ---> 23 json_object = json.loads(json_str) 24 return self.pydantic_object.parse_obj(json_object) File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is None and 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w) 333 """Return the Python representation of ``s`` (a ``str`` instance 334 containing a JSON document). 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx) 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) During handling of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(msg) OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)Now we can construct and use a OutputFixingParser. This output
4,329
and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.from langchain.output_parsers import OutputFixingParsernew_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())new_parser.parse(misformatted) Actor(name='Tom Hanks', film_names=['Forrest Gump'])PreviousEnum parserNextPydantic (JSON) parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors.
This output parser wraps another output parser, and in the event that the first one fails it calls out to another LLM to fix any errors. ->: and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.from langchain.output_parsers import OutputFixingParsernew_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())new_parser.parse(misformatted) Actor(name='Tom Hanks', film_names=['Forrest Gump'])PreviousEnum parserNextPydantic (JSON) parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,330
List parser | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersList parserList parserThis output parser can be used when you want to return a list of comma-separated items.from langchain.output_parsers import CommaSeparatedListOutputParserfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIoutput_parser = CommaSeparatedListOutputParser()format_instructions = output_parser.get_format_instructions()prompt = PromptTemplate( template="List five {subject}.\n{format_instructions}", input_variables=["subject"], partial_variables={"format_instructions": format_instructions})model = OpenAI(temperature=0)_input = prompt.format(subject="ice cream flavors")output = model(_input)output_parser.parse(output)The resulting output will be: ['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream']PreviousOutput parsersNextDatetime parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This output parser can be used when you want to return a list of comma-separated items.
This output parser can be used when you want to return a list of comma-separated items. ->: List parser | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersList parserList parserThis output parser can be used when you want to return a list of comma-separated items.from langchain.output_parsers import CommaSeparatedListOutputParserfrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIoutput_parser = CommaSeparatedListOutputParser()format_instructions = output_parser.get_format_instructions()prompt = PromptTemplate( template="List five {subject}.\n{format_instructions}", input_variables=["subject"], partial_variables={"format_instructions": format_instructions})model = OpenAI(temperature=0)_input = prompt.format(subject="ice cream flavors")output = model(_input)output_parser.parse(output)The resulting output will be: ['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chocolate Chip', 'Cookies and Cream']PreviousOutput parsersNextDatetime parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,331
Structured output parser | 🦜️🔗 Langchain
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. ->: Structured output parser | 🦜️🔗 Langchain
4,332
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersStructured output parserStructured output parserThis output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.from langchain.output_parsers import StructuredOutputParser, ResponseSchemafrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIHere we define the response schema we want to receive.response_schemas = [ ResponseSchema(name="answer", description="answer to the user's question"), ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")]output_parser = StructuredOutputParser.from_response_schemas(response_schemas)We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.format_instructions = output_parser.get_format_instructions()prompt = PromptTemplate( template="answer the users question as best as possible.\n{format_instructions}\n{question}", input_variables=["question"], partial_variables={"format_instructions": format_instructions})We can now use this to format a prompt to send to the language model, and then parse the returned result.model = OpenAI(temperature=0)_input = prompt.format_prompt(question="what's the capital of france?")output =
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersStructured output parserStructured output parserThis output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.from langchain.output_parsers import StructuredOutputParser, ResponseSchemafrom langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIHere we define the response schema we want to receive.response_schemas = [ ResponseSchema(name="answer", description="answer to the user's question"), ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")]output_parser = StructuredOutputParser.from_response_schemas(response_schemas)We now get a string that contains instructions for how the response should be formatted, and we then insert that into our prompt.format_instructions = output_parser.get_format_instructions()prompt = PromptTemplate( template="answer the users question as best as possible.\n{format_instructions}\n{question}", input_variables=["question"], partial_variables={"format_instructions": format_instructions})We can now use this to format a prompt to send to the language model, and then parse the returned result.model = OpenAI(temperature=0)_input = prompt.format_prompt(question="what's the capital of france?")output =
4,333
the capital of france?")output = model(_input.to_string())output_parser.parse(output) {'answer': 'Paris', 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}And here's an example of using this in a chat modelchat_model = ChatOpenAI(temperature=0)prompt = ChatPromptTemplate( messages=[ HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}") ], input_variables=["question"], partial_variables={"format_instructions": format_instructions})_input = prompt.format_prompt(question="what's the capital of france?")output = chat_model(_input.to_messages())output_parser.parse(output.content) {'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}PreviousRetry parserNextXML parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only.
This output parser can be used when you want to return multiple fields. While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. ->: the capital of france?")output = model(_input.to_string())output_parser.parse(output) {'answer': 'Paris', 'source': 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'}And here's an example of using this in a chat modelchat_model = ChatOpenAI(temperature=0)prompt = ChatPromptTemplate( messages=[ HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}") ], input_variables=["question"], partial_variables={"format_instructions": format_instructions})_input = prompt.format_prompt(question="what's the capital of france?")output = chat_model(_input.to_messages())output_parser.parse(output.content) {'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}PreviousRetry parserNextXML parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,334
XML parser | 🦜️🔗 Langchain
This output parser allows users to obtain results from LLM in the popular XML format.
This output parser allows users to obtain results from LLM in the popular XML format. ->: XML parser | 🦜️🔗 Langchain
4,335
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersXML parserXML parserThis output parser allows users to obtain results from LLM in the popular XML format. Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed XML. In the following example we use Claude model (https://docs.anthropic.com/claude/docs) which works really well with XML tags.from langchain.prompts import PromptTemplatefrom langchain.llms import Anthropicfrom langchain.output_parsers import XMLOutputParsermodel = Anthropic(model="claude-2", max_tokens_to_sample=512, temperature=0.1) /Users/harrisonchase/workplace/langchain/libs/langchain/langchain/llms/anthropic.py:171: UserWarning: This Anthropic LLM is deprecated. Please use `from langchain.chat_models import ChatAnthropic` instead warnings.warn(Let's start with the simple request to the model.actor_query = "Generate the shortened filmography for Tom Hanks."output = model( f"""Human:{actor_query}Please enclose the movies in <movie></movie> tagsAssistant:""")print(output) Here is the shortened filmography for Tom Hanks enclosed in <movie> tags: <movie>Splash (1984)</movie> <movie>Big (1988)</movie> <movie>A League of Their Own (1992)</movie> <movie>Sleepless in Seattle (1993)</movie> <movie>Forrest Gump (1994)</movie> <movie>Apollo 13 (1995)</movie> <movie>Toy Story (1995)</movie> <movie>Saving Private Ryan (1998)</movie> <movie>Cast Away
This output parser allows users to obtain results from LLM in the popular XML format.
This output parser allows users to obtain results from LLM in the popular XML format. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersXML parserXML parserThis output parser allows users to obtain results from LLM in the popular XML format. Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed XML. In the following example we use Claude model (https://docs.anthropic.com/claude/docs) which works really well with XML tags.from langchain.prompts import PromptTemplatefrom langchain.llms import Anthropicfrom langchain.output_parsers import XMLOutputParsermodel = Anthropic(model="claude-2", max_tokens_to_sample=512, temperature=0.1) /Users/harrisonchase/workplace/langchain/libs/langchain/langchain/llms/anthropic.py:171: UserWarning: This Anthropic LLM is deprecated. Please use `from langchain.chat_models import ChatAnthropic` instead warnings.warn(Let's start with the simple request to the model.actor_query = "Generate the shortened filmography for Tom Hanks."output = model( f"""Human:{actor_query}Please enclose the movies in <movie></movie> tagsAssistant:""")print(output) Here is the shortened filmography for Tom Hanks enclosed in <movie> tags: <movie>Splash (1984)</movie> <movie>Big (1988)</movie> <movie>A League of Their Own (1992)</movie> <movie>Sleepless in Seattle (1993)</movie> <movie>Forrest Gump (1994)</movie> <movie>Apollo 13 (1995)</movie> <movie>Toy Story (1995)</movie> <movie>Saving Private Ryan (1998)</movie> <movie>Cast Away
4,336
Private Ryan (1998)</movie> <movie>Cast Away (2000)</movie> <movie>The Da Vinci Code (2006)</movie> <movie>Toy Story 3 (2010)</movie> <movie>Captain Phillips (2013)</movie> <movie>Bridge of Spies (2015)</movie> <movie>Toy Story 4 (2019)</movie>Now we will use the XMLOutputParser in order to get the structured output.parser = XMLOutputParser()prompt = PromptTemplate( template=""" Human: {query} {format_instructions} Assistant:""", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)chain = prompt | model | parseroutput = chain.invoke({"query": actor_query})print(output) {'filmography': [{'movie': [{'title': 'Splash'}, {'year': '1984'}]}, {'movie': [{'title': 'Big'}, {'year': '1988'}]}, {'movie': [{'title': 'A League of Their Own'}, {'year': '1992'}]}, {'movie': [{'title': 'Sleepless in Seattle'}, {'year': '1993'}]}, {'movie': [{'title': 'Forrest Gump'}, {'year': '1994'}]}, {'movie': [{'title': 'Toy Story'}, {'year': '1995'}]}, {'movie': [{'title': 'Apollo 13'}, {'year': '1995'}]}, {'movie': [{'title': 'Saving Private Ryan'}, {'year': '1998'}]}, {'movie': [{'title': 'Cast Away'}, {'year': '2000'}]}, {'movie': [{'title': 'Catch Me If You Can'}, {'year': '2002'}]}, {'movie': [{'title': 'The Polar Express'}, {'year': '2004'}]}, {'movie': [{'title': 'Bridge of Spies'}, {'year': '2015'}]}]}Finally, let's add some tags to tailor the output to our needs.parser = XMLOutputParser(tags=["movies", "actor", "film", "name", "genre"])prompt = PromptTemplate( template=""" Human: {query} {format_instructions} Assistant:""", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)chain = prompt | model | parseroutput = chain.invoke({"query": actor_query})print(output) {'movies': [{'actor': [{'name': 'Tom Hanks'}, {'film': [{'name': 'Splash'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'Big'}, {'genre':
This output parser allows users to obtain results from LLM in the popular XML format.
This output parser allows users to obtain results from LLM in the popular XML format. ->: Private Ryan (1998)</movie> <movie>Cast Away (2000)</movie> <movie>The Da Vinci Code (2006)</movie> <movie>Toy Story 3 (2010)</movie> <movie>Captain Phillips (2013)</movie> <movie>Bridge of Spies (2015)</movie> <movie>Toy Story 4 (2019)</movie>Now we will use the XMLOutputParser in order to get the structured output.parser = XMLOutputParser()prompt = PromptTemplate( template=""" Human: {query} {format_instructions} Assistant:""", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)chain = prompt | model | parseroutput = chain.invoke({"query": actor_query})print(output) {'filmography': [{'movie': [{'title': 'Splash'}, {'year': '1984'}]}, {'movie': [{'title': 'Big'}, {'year': '1988'}]}, {'movie': [{'title': 'A League of Their Own'}, {'year': '1992'}]}, {'movie': [{'title': 'Sleepless in Seattle'}, {'year': '1993'}]}, {'movie': [{'title': 'Forrest Gump'}, {'year': '1994'}]}, {'movie': [{'title': 'Toy Story'}, {'year': '1995'}]}, {'movie': [{'title': 'Apollo 13'}, {'year': '1995'}]}, {'movie': [{'title': 'Saving Private Ryan'}, {'year': '1998'}]}, {'movie': [{'title': 'Cast Away'}, {'year': '2000'}]}, {'movie': [{'title': 'Catch Me If You Can'}, {'year': '2002'}]}, {'movie': [{'title': 'The Polar Express'}, {'year': '2004'}]}, {'movie': [{'title': 'Bridge of Spies'}, {'year': '2015'}]}]}Finally, let's add some tags to tailor the output to our needs.parser = XMLOutputParser(tags=["movies", "actor", "film", "name", "genre"])prompt = PromptTemplate( template=""" Human: {query} {format_instructions} Assistant:""", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)chain = prompt | model | parseroutput = chain.invoke({"query": actor_query})print(output) {'movies': [{'actor': [{'name': 'Tom Hanks'}, {'film': [{'name': 'Splash'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'Big'}, {'genre':
4,337
'Comedy'}]}, {'film': [{'name': 'Big'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'A League of Their Own'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'Sleepless in Seattle'}, {'genre': 'Romance'}]}, {'film': [{'name': 'Forrest Gump'}, {'genre': 'Drama'}]}, {'film': [{'name': 'Toy Story'}, {'genre': 'Animation'}]}, {'film': [{'name': 'Apollo 13'}, {'genre': 'Drama'}]}, {'film': [{'name': 'Saving Private Ryan'}, {'genre': 'War'}]}, {'film': [{'name': 'Cast Away'}, {'genre': 'Adventure'}]}, {'film': [{'name': 'The Green Mile'}, {'genre': 'Drama'}]}]}]}PreviousStructured output parserNextRetrievalCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This output parser allows users to obtain results from LLM in the popular XML format.
This output parser allows users to obtain results from LLM in the popular XML format. ->: 'Comedy'}]}, {'film': [{'name': 'Big'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'A League of Their Own'}, {'genre': 'Comedy'}]}, {'film': [{'name': 'Sleepless in Seattle'}, {'genre': 'Romance'}]}, {'film': [{'name': 'Forrest Gump'}, {'genre': 'Drama'}]}, {'film': [{'name': 'Toy Story'}, {'genre': 'Animation'}]}, {'film': [{'name': 'Apollo 13'}, {'genre': 'Drama'}]}, {'film': [{'name': 'Saving Private Ryan'}, {'genre': 'War'}]}, {'film': [{'name': 'Cast Away'}, {'genre': 'Adventure'}]}, {'film': [{'name': 'The Green Mile'}, {'genre': 'Drama'}]}]}]}PreviousStructured output parserNextRetrievalCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,338
Enum parser | 🦜️🔗 Langchain
This notebook shows how to use an Enum output parser.
This notebook shows how to use an Enum output parser. ->: Enum parser | 🦜️🔗 Langchain
4,339
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersEnum parserEnum parserThis notebook shows how to use an Enum output parser.from langchain.output_parsers.enum import EnumOutputParserfrom enum import Enumclass Colors(Enum): RED = "red" GREEN = "green" BLUE = "blue"parser = EnumOutputParser(enum=Colors)parser.parse("red") <Colors.RED: 'red'># Can handle spacesparser.parse(" green") <Colors.GREEN: 'green'># And new linesparser.parse("blue\n") <Colors.BLUE: 'blue'># And raises errors when appropriateparser.parse("yellow") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response) 24 try: ---> 25 return self.enum(response.strip()) 26 except ValueError: File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start) 314 if names is None: # simple value lookup --> 315 return cls.__new__(cls, value) 316 # otherwise, functional API: we're creating a new Enum type File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value) 610 if result is None and exc is None: --> 611 raise ve_exc 612 elif exc is None: ValueError: 'yellow' is not a valid Colors During handling of the above exception, another exception
This notebook shows how to use an Enum output parser.
This notebook shows how to use an Enum output parser. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersEnum parserEnum parserThis notebook shows how to use an Enum output parser.from langchain.output_parsers.enum import EnumOutputParserfrom enum import Enumclass Colors(Enum): RED = "red" GREEN = "green" BLUE = "blue"parser = EnumOutputParser(enum=Colors)parser.parse("red") <Colors.RED: 'red'># Can handle spacesparser.parse(" green") <Colors.GREEN: 'green'># And new linesparser.parse("blue\n") <Colors.BLUE: 'blue'># And raises errors when appropriateparser.parse("yellow") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/workplace/langchain/langchain/output_parsers/enum.py:25, in EnumOutputParser.parse(self, response) 24 try: ---> 25 return self.enum(response.strip()) 26 except ValueError: File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315, in EnumMeta.__call__(cls, value, names, module, qualname, type, start) 314 if names is None: # simple value lookup --> 315 return cls.__new__(cls, value) 316 # otherwise, functional API: we're creating a new Enum type File ~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611, in Enum.__new__(cls, value) 610 if result is None and exc is None: --> 611 raise ve_exc 612 elif exc is None: ValueError: 'yellow' is not a valid Colors During handling of the above exception, another exception
4,340
of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[8], line 2 1 # And raises errors when appropriate ----> 2 parser.parse("yellow") File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response) 25 return self.enum(response.strip()) 26 except ValueError: ---> 27 raise OutputParserException( 28 f"Response '{response}' is not one of the " 29 f"expected values: {self._valid_values}" 30 ) OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']PreviousDatetime parserNextAuto-fixing parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use an Enum output parser.
This notebook shows how to use an Enum output parser. ->: of the above exception, another exception occurred: OutputParserException Traceback (most recent call last) Cell In[8], line 2 1 # And raises errors when appropriate ----> 2 parser.parse("yellow") File ~/workplace/langchain/langchain/output_parsers/enum.py:27, in EnumOutputParser.parse(self, response) 25 return self.enum(response.strip()) 26 except ValueError: ---> 27 raise OutputParserException( 28 f"Response '{response}' is not one of the " 29 f"expected values: {self._valid_values}" 30 ) OutputParserException: Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']PreviousDatetime parserNextAuto-fixing parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,341
Datetime parser | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersDatetime parserDatetime parserThis OutputParser can be used to parse LLM output into datetime format.from langchain.prompts import PromptTemplatefrom langchain.output_parsers import DatetimeOutputParserfrom langchain.chains import LLMChainfrom langchain.llms import OpenAIoutput_parser = DatetimeOutputParser()template = """Answer the users question:{question}{format_instructions}"""prompt = PromptTemplate.from_template( template, partial_variables={"format_instructions": output_parser.get_format_instructions()},)chain = LLMChain(prompt=prompt, llm=OpenAI())output = chain.run("around when was bitcoin founded?")output '\n\n2008-01-03T18:15:05.000000Z'output_parser.parse(output) datetime.datetime(2008, 1, 3, 18, 15, 5)PreviousList parserNextEnum parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This OutputParser can be used to parse LLM output into datetime format.
This OutputParser can be used to parse LLM output into datetime format. ->: Datetime parser | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​OPromptsLanguage modelsOutput parsersList parserDatetime parserEnum parserAuto-fixing parserPydantic (JSON) parserRetry parserStructured output parserXML parserRetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesModel I/​OOutput parsersDatetime parserDatetime parserThis OutputParser can be used to parse LLM output into datetime format.from langchain.prompts import PromptTemplatefrom langchain.output_parsers import DatetimeOutputParserfrom langchain.chains import LLMChainfrom langchain.llms import OpenAIoutput_parser = DatetimeOutputParser()template = """Answer the users question:{question}{format_instructions}"""prompt = PromptTemplate.from_template( template, partial_variables={"format_instructions": output_parser.get_format_instructions()},)chain = LLMChain(prompt=prompt, llm=OpenAI())output = chain.run("around when was bitcoin founded?")output '\n\n2008-01-03T18:15:05.000000Z'output_parser.parse(output) datetime.datetime(2008, 1, 3, 18, 15, 5)PreviousList parserNextEnum parserCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,342
SQL | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: SQL | 🦜️🔗 Langchain
4,343
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingQA over structured dataSQLOn this pageSQLUse case​Enterprise data is often stored in SQL databases.LLMs make it possible to interact with SQL databases using natural language.LangChain offers SQL Chains and Agents to build and run SQL queries based on natural language prompts. These are compatible with any SQL dialect supported by SQLAlchemy (e.g., MySQL, PostgreSQL, Oracle SQL, Databricks, SQLite).They enable use cases such as:Generating queries that will be run based on natural language questionsCreating chatbots that can answer questions based on database dataBuilding custom dashboards based on insights a user wants to analyzeOverview​LangChain provides tools to interact with SQL Databases:Build SQL queries based on natural language user questionsQuery a SQL database using chains for query creation and executionInteract with a SQL database using agents for robust and flexible querying Quickstart​First, get required packages and set environment variables:pip install langchain langchain-experimental openai# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()The below example will use a SQLite connection with Chinook database. Follow installation steps to create Chinook.db in the same directory as this notebook:Save this file to the directory as Chinook_Sqlite.sqlRun sqlite3 Chinook.dbRun .read Chinook_Sqlite.sqlTest SELECT * FROM Artist LIMIT 10;Now, Chinhook.db is in our directory.Let's create a SQLDatabaseChain to create and execute SQL queries.from langchain.utilities import SQLDatabasefrom langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChaindb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm =
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingQA over structured dataSQLOn this pageSQLUse case​Enterprise data is often stored in SQL databases.LLMs make it possible to interact with SQL databases using natural language.LangChain offers SQL Chains and Agents to build and run SQL queries based on natural language prompts. These are compatible with any SQL dialect supported by SQLAlchemy (e.g., MySQL, PostgreSQL, Oracle SQL, Databricks, SQLite).They enable use cases such as:Generating queries that will be run based on natural language questionsCreating chatbots that can answer questions based on database dataBuilding custom dashboards based on insights a user wants to analyzeOverview​LangChain provides tools to interact with SQL Databases:Build SQL queries based on natural language user questionsQuery a SQL database using chains for query creation and executionInteract with a SQL database using agents for robust and flexible querying Quickstart​First, get required packages and set environment variables:pip install langchain langchain-experimental openai# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()The below example will use a SQLite connection with Chinook database. Follow installation steps to create Chinook.db in the same directory as this notebook:Save this file to the directory as Chinook_Sqlite.sqlRun sqlite3 Chinook.dbRun .read Chinook_Sqlite.sqlTest SELECT * FROM Artist LIMIT 10;Now, Chinhook.db is in our directory.Let's create a SQLDatabaseChain to create and execute SQL queries.from langchain.utilities import SQLDatabasefrom langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChaindb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm =
4,344
SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'Note that this both creates and executes the query. In the following sections, we will cover the 3 different use cases mentioned in the overview.Go deeper‚ÄãYou can load tabular data from other sources other than SQL Databases.
Open In Colab
Open In Colab ->: SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'Note that this both creates and executes the query. In the following sections, we will cover the 3 different use cases mentioned in the overview.Go deeper‚ÄãYou can load tabular data from other sources other than SQL Databases.
4,345
For example:Loading a CSV fileLoading a Pandas DataFrame Here you can check full list of Document LoadersCase 1: Text-to-SQL query‚Äãfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import create_sql_query_chainLet's create the chain that will build the SQL Query:chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)response = chain.invoke({"question":"How many employees are there"})print(response) SELECT COUNT(*) FROM EmployeeAfter building the SQL query based on a user question, we can execute the query:db.run(response) '[(8,)]'As we can see, the SQL Query Builder chain only created the query, and we handled the query execution separately.Go deeper‚ÄãLooking under the hoodWe can look at the LangSmith trace to unpack this:Some papers have reported good performance when prompting with:A CREATE TABLE description for each table, which include column names, their types, etcFollowed by three example rows in a SELECT statementcreate_sql_query_chain adopts this the best practice (see more in this blog).
Open In Colab
Open In Colab ->: For example:Loading a CSV fileLoading a Pandas DataFrame Here you can check full list of Document LoadersCase 1: Text-to-SQL query‚Äãfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import create_sql_query_chainLet's create the chain that will build the SQL Query:chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)response = chain.invoke({"question":"How many employees are there"})print(response) SELECT COUNT(*) FROM EmployeeAfter building the SQL query based on a user question, we can execute the query:db.run(response) '[(8,)]'As we can see, the SQL Query Builder chain only created the query, and we handled the query execution separately.Go deeper‚ÄãLooking under the hoodWe can look at the LangSmith trace to unpack this:Some papers have reported good performance when prompting with:A CREATE TABLE description for each table, which include column names, their types, etcFollowed by three example rows in a SELECT statementcreate_sql_query_chain adopts this the best practice (see more in this blog).
4,346
ImprovementsThe query builder can be improved in several ways, such as (but not limited to):Customizing database description to your specific use caseHardcoding a few examples of questions and their corresponding SQL query in the promptUsing a vector database to include dynamic examples that are relevant to the specific user questionAll these examples involve customizing the chain's prompt. For example, we can include a few examples in our prompt like so:from langchain.prompts import PromptTemplateTEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.Use the following format:Question: "Question here"SQLQuery: "SQL Query to run"SQLResult: "Result of the SQLQuery"Answer: "Final answer here"Only use the following tables:{table_info}.Some examples of SQL queries that correspond to questions are:{few_shot_examples}Question: {input}"""CUSTOM_PROMPT = PromptTemplate( input_variables=["input", "few_shot_examples", "table_info", "dialect"], template=TEMPLATE)We can also access this prompt in the LangChain prompt hub.This will work with your LangSmith API key.from langchain import hubCUSTOM_PROMPT = hub.pull("rlm/text-to-sql")Case 2: Text-to-SQL query and execution‚ÄãWe can use SQLDatabaseChain from langchain_experimental to create and run SQL queries.from langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChainllm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'As we can see, we get the same result as the previous case.Here, the chain also handles the query execution and provides a final answer based on the user question
Open In Colab
Open In Colab ->: ImprovementsThe query builder can be improved in several ways, such as (but not limited to):Customizing database description to your specific use caseHardcoding a few examples of questions and their corresponding SQL query in the promptUsing a vector database to include dynamic examples that are relevant to the specific user questionAll these examples involve customizing the chain's prompt. For example, we can include a few examples in our prompt like so:from langchain.prompts import PromptTemplateTEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.Use the following format:Question: "Question here"SQLQuery: "SQL Query to run"SQLResult: "Result of the SQLQuery"Answer: "Final answer here"Only use the following tables:{table_info}.Some examples of SQL queries that correspond to questions are:{few_shot_examples}Question: {input}"""CUSTOM_PROMPT = PromptTemplate( input_variables=["input", "few_shot_examples", "table_info", "dialect"], template=TEMPLATE)We can also access this prompt in the LangChain prompt hub.This will work with your LangSmith API key.from langchain import hubCUSTOM_PROMPT = hub.pull("rlm/text-to-sql")Case 2: Text-to-SQL query and execution‚ÄãWe can use SQLDatabaseChain from langchain_experimental to create and run SQL queries.from langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChainllm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'As we can see, we get the same result as the previous case.Here, the chain also handles the query execution and provides a final answer based on the user question
4,347
a final answer based on the user question and the query result.Be careful while using this approach as it is susceptible to SQL Injection:The chain is executing queries that are created by an LLM, and weren't validatede.g. records may be created, modified or deleted unintentionally_This is why we see the SQLDatabaseChain is inside langchain_experimental.Go deeper‚ÄãLooking under the hoodWe can use the LangSmith trace to see what is happening under the hood:As discussed above, first we create the query:text: ' SELECT COUNT(*) FROM "Employee";'Then, it executes the query and passes the results to an LLM for synthesis.ImprovementsThe performance of the SQLDatabaseChain can be enhanced in several ways:Adding sample rowsSpecifying custom table informationUsing Query Checker self-correct invalid SQL using parameter use_query_checker=TrueCustomizing the LLM Prompt include specific instructions or relevant information, using parameter prompt=CUSTOM_PROMPTGet intermediate steps access the SQL statement as well as the final result using parameter return_intermediate_steps=TrueLimit the number of rows a query will return using parameter top_k=5You might find SQLDatabaseSequentialChain
Open In Colab
Open In Colab ->: a final answer based on the user question and the query result.Be careful while using this approach as it is susceptible to SQL Injection:The chain is executing queries that are created by an LLM, and weren't validatede.g. records may be created, modified or deleted unintentionally_This is why we see the SQLDatabaseChain is inside langchain_experimental.Go deeper‚ÄãLooking under the hoodWe can use the LangSmith trace to see what is happening under the hood:As discussed above, first we create the query:text: ' SELECT COUNT(*) FROM "Employee";'Then, it executes the query and passes the results to an LLM for synthesis.ImprovementsThe performance of the SQLDatabaseChain can be enhanced in several ways:Adding sample rowsSpecifying custom table informationUsing Query Checker self-correct invalid SQL using parameter use_query_checker=TrueCustomizing the LLM Prompt include specific instructions or relevant information, using parameter prompt=CUSTOM_PROMPTGet intermediate steps access the SQL statement as well as the final result using parameter return_intermediate_steps=TrueLimit the number of rows a query will return using parameter top_k=5You might find SQLDatabaseSequentialChain
4,348
useful for cases in which the number of tables in the database is large.This Sequential Chain handles the process of:Determining which tables to use based on the user questionCalling the normal SQL database chain using only relevant tablesAdding Sample RowsProviding sample data can help the LLM construct correct queries when the data format is not obvious. For example, we can tell LLM that artists are saved with their full names by providing two rows from the Track table.db = SQLDatabase.from_uri( "sqlite:///Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2)The sample rows are added to the prompt after each corresponding table's column information.We can use db.table_info and check which sample rows are included:print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */Case 3: SQL agents‚ÄãLangChain has an SQL Agent which provides a more flexible way of interacting with SQL Databases than the SQLDatabaseChain.The main advantages of using the SQL Agent are:It can answer questions based on the databases' schema as well as
Open In Colab
Open In Colab ->: useful for cases in which the number of tables in the database is large.This Sequential Chain handles the process of:Determining which tables to use based on the user questionCalling the normal SQL database chain using only relevant tablesAdding Sample RowsProviding sample data can help the LLM construct correct queries when the data format is not obvious. For example, we can tell LLM that artists are saved with their full names by providing two rows from the Track table.db = SQLDatabase.from_uri( "sqlite:///Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2)The sample rows are added to the prompt after each corresponding table's column information.We can use db.table_info and check which sample rows are included:print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */Case 3: SQL agents‚ÄãLangChain has an SQL Agent which provides a more flexible way of interacting with SQL Databases than the SQLDatabaseChain.The main advantages of using the SQL Agent are:It can answer questions based on the databases' schema as well as
4,349
based on the databases' schema as well as on the databases' content (like describing a specific table)It can recover from errors by running a generated query, catching the traceback and regenerating it correctlyTo initialize the agent, we use create_sql_agent function. This agent contains the SQLDatabaseToolkit which contains tools to: Create and execute queriesCheck query syntaxRetrieve table descriptions... and morefrom langchain.agents import create_sql_agentfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit# from langchain.agents import AgentExecutorfrom langchain.agents.agent_types import AgentTypedb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0)), verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Agent task example #1 - Running queries‚Äãagent_executor.run( "List the total sales per country. Which country's customers spent the most?") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: I should query the schema of the Invoice and Customer tables. Action: sql_db_schema Action Input: Invoice, Customer Observation: CREATE TABLE "Customer" ( "CustomerId" INTEGER NOT NULL, "FirstName" NVARCHAR(40) NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "Company" NVARCHAR(80), "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60) NOT NULL, "SupportRepId" INTEGER, PRIMARY KEY ("CustomerId"), FOREIGN KEY("SupportRepId") REFERENCES "Employee"
Open In Colab
Open In Colab ->: based on the databases' schema as well as on the databases' content (like describing a specific table)It can recover from errors by running a generated query, catching the traceback and regenerating it correctlyTo initialize the agent, we use create_sql_agent function. This agent contains the SQLDatabaseToolkit which contains tools to: Create and execute queriesCheck query syntaxRetrieve table descriptions... and morefrom langchain.agents import create_sql_agentfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit# from langchain.agents import AgentExecutorfrom langchain.agents.agent_types import AgentTypedb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0)), verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Agent task example #1 - Running queries‚Äãagent_executor.run( "List the total sales per country. Which country's customers spent the most?") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: I should query the schema of the Invoice and Customer tables. Action: sql_db_schema Action Input: Invoice, Customer Observation: CREATE TABLE "Customer" ( "CustomerId" INTEGER NOT NULL, "FirstName" NVARCHAR(40) NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "Company" NVARCHAR(80), "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60) NOT NULL, "SupportRepId" INTEGER, PRIMARY KEY ("CustomerId"), FOREIGN KEY("SupportRepId") REFERENCES "Employee"
4,350
FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId") ) /* 3 rows from Customer table: CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId 1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima, 2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3 2 Leonie Köhler None Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 +49 0711 2842222 None leonekohler@surfeu.de 5 3 François Tremblay None 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 None ftremblay@gmail.com 3 */ CREATE TABLE "Invoice" ( "InvoiceId" INTEGER NOT NULL, "CustomerId" INTEGER NOT NULL, "InvoiceDate" DATETIME NOT NULL, "BillingAddress" NVARCHAR(70), "BillingCity" NVARCHAR(40), "BillingState" NVARCHAR(40), "BillingCountry" NVARCHAR(40), "BillingPostalCode" NVARCHAR(10), "Total" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("InvoiceId"), FOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId") ) /* 3 rows from Invoice table: InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total 1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 1.98 2 4 2009-01-02 00:00:00 Ullevålsveien 14 Oslo None Norway 0171 3.96 3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels None Belgium 1000 5.94 */ Thought: I should query the total sales per country. Action: sql_db_query Action Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10 Observation: [('USA',
Open In Colab
Open In Colab ->: FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId") ) /* 3 rows from Customer table: CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId 1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima, 2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3 2 Leonie Köhler None Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 +49 0711 2842222 None leonekohler@surfeu.de 5 3 François Tremblay None 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 None ftremblay@gmail.com 3 */ CREATE TABLE "Invoice" ( "InvoiceId" INTEGER NOT NULL, "CustomerId" INTEGER NOT NULL, "InvoiceDate" DATETIME NOT NULL, "BillingAddress" NVARCHAR(70), "BillingCity" NVARCHAR(40), "BillingState" NVARCHAR(40), "BillingCountry" NVARCHAR(40), "BillingPostalCode" NVARCHAR(10), "Total" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("InvoiceId"), FOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId") ) /* 3 rows from Invoice table: InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total 1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 1.98 2 4 2009-01-02 00:00:00 Ullevålsveien 14 Oslo None Norway 0171 3.96 3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels None Belgium 1000 5.94 */ Thought: I should query the total sales per country. Action: sql_db_query Action Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10 Observation: [('USA',
4,351
TotalSales DESC LIMIT 10 Observation: [('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62)] Thought: I now know the final answer Final Answer: The country with the highest total sales is the USA, with a total of $523.06. > Finished chain. 'The country with the highest total sales is the USA, with a total of $523.06.'Looking at the LangSmith trace, we can see:The agent is using a ReAct style promptFirst, it will look at the tables: Action: sql_db_list_tables using tool sql_db_list_tablesGiven the tables as an observation, it thinks and then determinates the next action:Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, TrackThought: I should query the schema of the Invoice and Customer tables.Action: sql_db_schemaAction Input: Invoice, CustomerIt then formulates the query using the schema from tool sql_db_schemaThought: I should query the total sales per country.Action: sql_db_queryAction Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10It finally executes the generated query using tool sql_db_queryAgent task example #2 - Describing a Table‚Äãagent_executor.run("Describe the playlisttrack table") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: The PlaylistTrack table is the most relevant to the question. Action: sql_db_schema Action Input: PlaylistTrack Observation: CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER
Open In Colab
Open In Colab ->: TotalSales DESC LIMIT 10 Observation: [('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62)] Thought: I now know the final answer Final Answer: The country with the highest total sales is the USA, with a total of $523.06. > Finished chain. 'The country with the highest total sales is the USA, with a total of $523.06.'Looking at the LangSmith trace, we can see:The agent is using a ReAct style promptFirst, it will look at the tables: Action: sql_db_list_tables using tool sql_db_list_tablesGiven the tables as an observation, it thinks and then determinates the next action:Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, TrackThought: I should query the schema of the Invoice and Customer tables.Action: sql_db_schemaAction Input: Invoice, CustomerIt then formulates the query using the schema from tool sql_db_schemaThought: I should query the total sales per country.Action: sql_db_queryAction Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10It finally executes the generated query using tool sql_db_queryAgent task example #2 - Describing a Table‚Äãagent_executor.run("Describe the playlisttrack table") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: The PlaylistTrack table is the most relevant to the question. Action: sql_db_schema Action Input: PlaylistTrack Observation: CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER
4,352
"PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) /* 3 rows from PlaylistTrack table: PlaylistId TrackId 1 3402 1 3389 1 3390 */ Thought: I now know the final answer Final Answer: The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table. > Finished chain. 'The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table.'Extending the SQL Toolkit​Although the out-of-the-box SQL Toolkit contains the necessary tools to start working on a database, it is often the case that some extra tools may be useful for extending the agent's capabilities. This is particularly useful when trying to use domain specific knowledge in the solution, in order to improve its overall performance.Some examples include:Including dynamic few shot examplesFinding misspellings in proper nouns to use as column filtersWe can create separate tools which tackle these specific use cases and include them as a complement to the standard SQL Toolkit. Let's see how to include these two custom tools.Including dynamic few-shot examples​In order to include dynamic few-shot examples, we need a custom Retriever Tool that handles the vector database in order to retrieve the examples that are semantically similar to the user’s question.Let's start by creating a dictionary with some examples: # few_shots = {'List all artists.': 'SELECT * FROM artists;',# "Find all albums for the artist 'AC/DC'.": "SELECT * FROM albums WHERE ArtistId = (SELECT
Open In Colab
Open In Colab ->: "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) /* 3 rows from PlaylistTrack table: PlaylistId TrackId 1 3402 1 3389 1 3390 */ Thought: I now know the final answer Final Answer: The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table. > Finished chain. 'The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table.'Extending the SQL Toolkit​Although the out-of-the-box SQL Toolkit contains the necessary tools to start working on a database, it is often the case that some extra tools may be useful for extending the agent's capabilities. This is particularly useful when trying to use domain specific knowledge in the solution, in order to improve its overall performance.Some examples include:Including dynamic few shot examplesFinding misspellings in proper nouns to use as column filtersWe can create separate tools which tackle these specific use cases and include them as a complement to the standard SQL Toolkit. Let's see how to include these two custom tools.Including dynamic few-shot examples​In order to include dynamic few-shot examples, we need a custom Retriever Tool that handles the vector database in order to retrieve the examples that are semantically similar to the user’s question.Let's start by creating a dictionary with some examples: # few_shots = {'List all artists.': 'SELECT * FROM artists;',# "Find all albums for the artist 'AC/DC'.": "SELECT * FROM albums WHERE ArtistId = (SELECT
4,353
"SELECT * FROM albums WHERE ArtistId = (SELECT ArtistId FROM artists WHERE Name = 'AC/DC');",# "List all tracks in the 'Rock' genre.": "SELECT * FROM tracks WHERE GenreId = (SELECT GenreId FROM genres WHERE Name = 'Rock');",# 'Find the total duration of all tracks.': 'SELECT SUM(Milliseconds) FROM tracks;',# 'List all customers from Canada.': "SELECT * FROM customers WHERE Country = 'Canada';",# 'How many tracks are there in the album with ID 5?': 'SELECT COUNT(*) FROM tracks WHERE AlbumId = 5;',# 'Find the total number of invoices.': 'SELECT COUNT(*) FROM invoices;',# 'List all tracks that are longer than 5 minutes.': 'SELECT * FROM tracks WHERE Milliseconds > 300000;',# 'Who are the top 5 customers by total purchase?': 'SELECT CustomerId, SUM(Total) AS TotalPurchase FROM invoices GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;',# 'Which albums are from the year 2000?': "SELECT * FROM albums WHERE strftime('%Y', ReleaseDate) = '2000';",# 'How many employees are there': 'SELECT COUNT(*) FROM "employee"'# }We can then create a retriever using the list of questions, assigning the target SQL query as metadata:from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import FAISSfrom langchain.schema import Documentembeddings = OpenAIEmbeddings()few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()]vector_db = FAISS.from_documents(few_shot_docs, embeddings)retriever = vector_db.as_retriever()Now we can create our own custom tool and append it as a new tool in the create_sql_agent function:from langchain.agents.agent_toolkits import create_retriever_tooltool_description = """This tool will help you understand similar examples to adapt them to the user question.Input to this tool should be the user question."""retriever_tool =
Open In Colab
Open In Colab ->: "SELECT * FROM albums WHERE ArtistId = (SELECT ArtistId FROM artists WHERE Name = 'AC/DC');",# "List all tracks in the 'Rock' genre.": "SELECT * FROM tracks WHERE GenreId = (SELECT GenreId FROM genres WHERE Name = 'Rock');",# 'Find the total duration of all tracks.': 'SELECT SUM(Milliseconds) FROM tracks;',# 'List all customers from Canada.': "SELECT * FROM customers WHERE Country = 'Canada';",# 'How many tracks are there in the album with ID 5?': 'SELECT COUNT(*) FROM tracks WHERE AlbumId = 5;',# 'Find the total number of invoices.': 'SELECT COUNT(*) FROM invoices;',# 'List all tracks that are longer than 5 minutes.': 'SELECT * FROM tracks WHERE Milliseconds > 300000;',# 'Who are the top 5 customers by total purchase?': 'SELECT CustomerId, SUM(Total) AS TotalPurchase FROM invoices GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;',# 'Which albums are from the year 2000?': "SELECT * FROM albums WHERE strftime('%Y', ReleaseDate) = '2000';",# 'How many employees are there': 'SELECT COUNT(*) FROM "employee"'# }We can then create a retriever using the list of questions, assigning the target SQL query as metadata:from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import FAISSfrom langchain.schema import Documentembeddings = OpenAIEmbeddings()few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()]vector_db = FAISS.from_documents(few_shot_docs, embeddings)retriever = vector_db.as_retriever()Now we can create our own custom tool and append it as a new tool in the create_sql_agent function:from langchain.agents.agent_toolkits import create_retriever_tooltool_description = """This tool will help you understand similar examples to adapt them to the user question.Input to this tool should be the user question."""retriever_tool =
4,354
should be the user question."""retriever_tool = create_retriever_tool( retriever, name='sql_get_similar_examples', description=tool_description )custom_tool_list = [retriever_tool]Now we can create the agent, adjusting the standard SQL Agent suffix to consider our use case. Although the most straightforward way to handle this would be to include it just in the tool description, this is often not enough and we need to specify it in the agent prompt using the suffix argument in the constructor.from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAIdb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4',temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """I should first get the similar examples I know.If the examples are enough to construct the query, I can build it.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many employees do we have?") > Entering new AgentExecutor chain... Invoking: `sql_get_similar_examples` with `How many employees do we have?` [Document(page_content='How many employees are there', metadata={'sql_query': 'SELECT COUNT(*) FROM "employee"'}), Document(page_content='Find the total number of invoices.', metadata={'sql_query': 'SELECT COUNT(*) FROM invoices;'})] Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM employee` responded: {content} SELECT COUNT(*)
Open In Colab
Open In Colab ->: should be the user question."""retriever_tool = create_retriever_tool( retriever, name='sql_get_similar_examples', description=tool_description )custom_tool_list = [retriever_tool]Now we can create the agent, adjusting the standard SQL Agent suffix to consider our use case. Although the most straightforward way to handle this would be to include it just in the tool description, this is often not enough and we need to specify it in the agent prompt using the suffix argument in the constructor.from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAIdb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4',temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """I should first get the similar examples I know.If the examples are enough to construct the query, I can build it.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many employees do we have?") > Entering new AgentExecutor chain... Invoking: `sql_get_similar_examples` with `How many employees do we have?` [Document(page_content='How many employees are there', metadata={'sql_query': 'SELECT COUNT(*) FROM "employee"'}), Document(page_content='Find the total number of invoices.', metadata={'sql_query': 'SELECT COUNT(*) FROM invoices;'})] Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM employee` responded: {content} SELECT COUNT(*)
4,355
responded: {content} SELECT COUNT(*) FROM employee Invoking: `sql_db_query` with `SELECT COUNT(*) FROM employee` [(8,)]We have 8 employees. > Finished chain. 'We have 8 employees.'As we can see, the agent first used the sql_get_similar_examples tool in order to retrieve similar examples. As the question was very similar to other few shot examples, the agent didn't need to use any other tool from the standard Toolkit, thus saving time and tokens.Finding and correcting misspellings for proper nouns‚ÄãIn order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly. We can achieve this by creating a vector store using all the distinct proper nouns that exist in the database. We can then have the agent query that vector store each time the user includes a proper noun in their question, to find the correct spelling for that word. In this way, the agent can make sure it understands which entity the user is referring to before building the target query.Let's follow a similar approach to the few shots, but without metadata: just embedding the proper nouns and then querying to get the most similar one to the misspelled user question.First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:import astimport redef run_query_save_results(db, query): res = db.run(query) res = [el for sub in ast.literal_eval(res) for el in sub if el] res = [re.sub(r'\b\d+\b', '', string).strip() for string in res] return resartists = run_query_save_results(db, "SELECT Name FROM Artist")albums = run_query_save_results(db, "SELECT Title FROM Album")Now we can proceed with creating the custom retriever tool and the final agent:from langchain.agents.agent_toolkits import create_retriever_toolfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom
Open In Colab
Open In Colab ->: responded: {content} SELECT COUNT(*) FROM employee Invoking: `sql_db_query` with `SELECT COUNT(*) FROM employee` [(8,)]We have 8 employees. > Finished chain. 'We have 8 employees.'As we can see, the agent first used the sql_get_similar_examples tool in order to retrieve similar examples. As the question was very similar to other few shot examples, the agent didn't need to use any other tool from the standard Toolkit, thus saving time and tokens.Finding and correcting misspellings for proper nouns‚ÄãIn order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly. We can achieve this by creating a vector store using all the distinct proper nouns that exist in the database. We can then have the agent query that vector store each time the user includes a proper noun in their question, to find the correct spelling for that word. In this way, the agent can make sure it understands which entity the user is referring to before building the target query.Let's follow a similar approach to the few shots, but without metadata: just embedding the proper nouns and then querying to get the most similar one to the misspelled user question.First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:import astimport redef run_query_save_results(db, query): res = db.run(query) res = [el for sub in ast.literal_eval(res) for el in sub if el] res = [re.sub(r'\b\d+\b', '', string).strip() for string in res] return resartists = run_query_save_results(db, "SELECT Name FROM Artist")albums = run_query_save_results(db, "SELECT Title FROM Album")Now we can proceed with creating the custom retriever tool and the final agent:from langchain.agents.agent_toolkits import create_retriever_toolfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom
4,356
import OpenAIEmbeddingsfrom langchain.vectorstores import FAISStexts = (artists + albums)embeddings = OpenAIEmbeddings()vector_db = FAISS.from_texts(texts, embeddings)retriever = vector_db.as_retriever()retriever_tool = create_retriever_tool( retriever, name='name_search', description='use to learn how a piece of data is actually written, can be from names, surnames addresses etc' )custom_tool_list = [retriever_tool]from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAI# db = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4', temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """If a user asks for me to filter based on proper nouns, I should first check the spelling using the name_search tool.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many albums does alis in pains have?") > Entering new AgentExecutor chain... Invoking: `name_search` with `alis in pains` [Document(page_content='House of Pain', metadata={}), Document(page_content='Alice In Chains', metadata={}), Document(page_content='Aisha Duo', metadata={}), Document(page_content='House Of Pain', metadata={})] Invoking: `sql_db_list_tables` with `` responded: {content} Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Invoking: `sql_db_schema` with `Album, Artist`
Open In Colab
Open In Colab ->: import OpenAIEmbeddingsfrom langchain.vectorstores import FAISStexts = (artists + albums)embeddings = OpenAIEmbeddings()vector_db = FAISS.from_texts(texts, embeddings)retriever = vector_db.as_retriever()retriever_tool = create_retriever_tool( retriever, name='name_search', description='use to learn how a piece of data is actually written, can be from names, surnames addresses etc' )custom_tool_list = [retriever_tool]from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAI# db = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4', temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """If a user asks for me to filter based on proper nouns, I should first check the spelling using the name_search tool.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many albums does alis in pains have?") > Entering new AgentExecutor chain... Invoking: `name_search` with `alis in pains` [Document(page_content='House of Pain', metadata={}), Document(page_content='Alice In Chains', metadata={}), Document(page_content='Aisha Duo', metadata={}), Document(page_content='House Of Pain', metadata={})] Invoking: `sql_db_list_tables` with `` responded: {content} Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Invoking: `sql_db_schema` with `Album, Artist`
4,357
Invoking: `sql_db_schema` with `Album, Artist` responded: {content} CREATE TABLE "Album" ( "AlbumId" INTEGER NOT NULL, "Title" NVARCHAR(160) NOT NULL, "ArtistId" INTEGER NOT NULL, PRIMARY KEY ("AlbumId"), FOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId") ) /* 3 rows from Album table: AlbumId Title ArtistId 1 For Those About To Rock We Salute You 1 2 Balls to the Wall 2 3 Restless and Wild 2 */ CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId") ) /* 3 rows from Artist table: ArtistId Name 1 AC/DC 2 Accept 3 Aerosmith */ Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` responded: {content} SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains' Invoking: `sql_db_query` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` [(1,)]Alice In Chains has 1 album in the database. > Finished chain. 'Alice In Chains has 1 album in the database.'As we can see, the agent used the name_search tool in order to check how to correctly query the database for this specific artist.Go deeper‚ÄãTo learn more about the SQL Agent and how it works we refer to the SQL Agent Toolkit documentation.You can also check Agents for other document types:Pandas AgentCSV AgentElastic Search‚ÄãGoing beyond the above use-case, there are integrations with other databases.For example, we can interact with Elasticsearch analytics database. This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).The Elasticsearch client must have permissions for index listing, mapping description and search
Open In Colab
Open In Colab ->: Invoking: `sql_db_schema` with `Album, Artist` responded: {content} CREATE TABLE "Album" ( "AlbumId" INTEGER NOT NULL, "Title" NVARCHAR(160) NOT NULL, "ArtistId" INTEGER NOT NULL, PRIMARY KEY ("AlbumId"), FOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId") ) /* 3 rows from Album table: AlbumId Title ArtistId 1 For Those About To Rock We Salute You 1 2 Balls to the Wall 2 3 Restless and Wild 2 */ CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId") ) /* 3 rows from Artist table: ArtistId Name 1 AC/DC 2 Accept 3 Aerosmith */ Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` responded: {content} SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains' Invoking: `sql_db_query` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` [(1,)]Alice In Chains has 1 album in the database. > Finished chain. 'Alice In Chains has 1 album in the database.'As we can see, the agent used the name_search tool in order to check how to correctly query the database for this specific artist.Go deeper‚ÄãTo learn more about the SQL Agent and how it works we refer to the SQL Agent Toolkit documentation.You can also check Agents for other document types:Pandas AgentCSV AgentElastic Search‚ÄãGoing beyond the above use-case, there are integrations with other databases.For example, we can interact with Elasticsearch analytics database. This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).The Elasticsearch client must have permissions for index listing, mapping description and search
4,358
for index listing, mapping description and search queries.See here for instructions on how to run Elasticsearch locally.Make sure to install the Elasticsearch Python client before:pip install elasticsearchfrom elasticsearch import Elasticsearchfrom langchain.chat_models import ChatOpenAIfrom langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain# Initialize Elasticsearch python client.# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.ElasticsearchELASTIC_SEARCH_SERVER = "https://elastic:pass@localhost:9200"db = Elasticsearch(ELASTIC_SEARCH_SERVER)Uncomment the next cell to initially populate your db.# customers = [# {"firstname": "Jennifer", "lastname": "Walters"},# {"firstname": "Monica","lastname":"Rambeau"},# {"firstname": "Carol","lastname":"Danvers"},# {"firstname": "Wanda","lastname":"Maximoff"},# {"firstname": "Jennifer","lastname":"Takeda"},# ]# for i, customer in enumerate(customers):# db.create(index="customers", document=customer, id=i)llm = ChatOpenAI(model_name="gpt-4", temperature=0)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)question = "What are the first names of all the customers?"chain.run(question)We can customize the prompt.from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATEfrom langchain.prompts.prompt import PromptTemplatePROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.Pay attention to use only the column names that you can see in the mapping description. Be careful to not query
Open In Colab
Open In Colab ->: for index listing, mapping description and search queries.See here for instructions on how to run Elasticsearch locally.Make sure to install the Elasticsearch Python client before:pip install elasticsearchfrom elasticsearch import Elasticsearchfrom langchain.chat_models import ChatOpenAIfrom langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain# Initialize Elasticsearch python client.# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.ElasticsearchELASTIC_SEARCH_SERVER = "https://elastic:pass@localhost:9200"db = Elasticsearch(ELASTIC_SEARCH_SERVER)Uncomment the next cell to initially populate your db.# customers = [# {"firstname": "Jennifer", "lastname": "Walters"},# {"firstname": "Monica","lastname":"Rambeau"},# {"firstname": "Carol","lastname":"Danvers"},# {"firstname": "Wanda","lastname":"Maximoff"},# {"firstname": "Jennifer","lastname":"Takeda"},# ]# for i, customer in enumerate(customers):# db.create(index="customers", document=customer, id=i)llm = ChatOpenAI(model_name="gpt-4", temperature=0)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)question = "What are the first names of all the customers?"chain.run(question)We can customize the prompt.from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATEfrom langchain.prompts.prompt import PromptTemplatePROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.Pay attention to use only the column names that you can see in the mapping description. Be careful to not query
4,359
the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.Use the following format:Question: Question hereESQuery: Elasticsearch Query formatted as json"""PROMPT = PromptTemplate.from_template( PROMPT_TEMPLATE,)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)NextRetrieval-augmented generation (RAG)Use caseOverviewQuickstartGo deeperCase 1: Text-to-SQL queryGo deeperCase 2: Text-to-SQL query and executionGo deeperCase 3: SQL agentsAgent task example #1 - Running queriesAgent task example #2 - Describing a TableExtending the SQL ToolkitGo deeperElastic SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Open In Colab
Open In Colab ->: the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.Use the following format:Question: Question hereESQuery: Elasticsearch Query formatted as json"""PROMPT = PromptTemplate.from_template( PROMPT_TEMPLATE,)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)NextRetrieval-augmented generation (RAG)Use caseOverviewQuickstartGo deeperCase 1: Text-to-SQL queryGo deeperCase 2: Text-to-SQL query and executionGo deeperCase 3: SQL agentsAgent task example #1 - Running queriesAgent task example #2 - Describing a TableExtending the SQL ToolkitGo deeperElastic SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,360
Agents | 🦜️🔗 Langchain
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: Agents | 🦜️🔗 Langchain
4,361
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsOn this pageAgentsThe core idea of agents is to use an LLM to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.Some important terminology (and schema) to know:AgentAction: This is a dataclass that represents the action an agent should take. It has a tool property (which is the name of the tool that should be invoked) and a tool_input property (the input to that tool)AgentFinish: This is a dataclass that signifies that the agent has finished and should return to the user. It has a return_values parameter, which is a dictionary to return. It often only has one key - output - that is a string, and so often it is just this key that is returned.intermediate_steps: These represent previous agent actions and corresponding outputs that are passed around. These are important to pass to future iteration so the agent knows what work it has already done. This is typed as a List[Tuple[AgentAction, Any]]. Note that observation is currently left as type Any to be maximally flexible. In practice, this is often a string.There are several key components here:Agent​This is the chain responsible for deciding what step to take next. This is powered by a language model and a prompt.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsAgent TypesHow-toToolsToolkitsCallbacksModulesSecurityGuidesMoreModulesAgentsOn this pageAgentsThe core idea of agents is to use an LLM to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.Some important terminology (and schema) to know:AgentAction: This is a dataclass that represents the action an agent should take. It has a tool property (which is the name of the tool that should be invoked) and a tool_input property (the input to that tool)AgentFinish: This is a dataclass that signifies that the agent has finished and should return to the user. It has a return_values parameter, which is a dictionary to return. It often only has one key - output - that is a string, and so often it is just this key that is returned.intermediate_steps: These represent previous agent actions and corresponding outputs that are passed around. These are important to pass to future iteration so the agent knows what work it has already done. This is typed as a List[Tuple[AgentAction, Any]]. Note that observation is currently left as type Any to be maximally flexible. In practice, this is often a string.There are several key components here:Agent​This is the chain responsible for deciding what step to take next. This is powered by a language model and a prompt.
4,362
This is powered by a language model and a prompt. The inputs to this chain are:List of available toolsUser inputAny previously executed steps (intermediate_steps)This chain then returns either the next action to take or the final response to send to the user (AgentAction or AgentFinish).Different agents have different prompting styles for reasoning, different ways of encoding input, and different ways of parsing the output. For a full list of agent types see agent typesTools‚ÄãTools are functions that an agent calls. There are two important considerations here:Giving the agent access to the right toolsDescribing the tools in a way that is most helpful to the agentWithout both, the agent you are trying to build will not work. If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objective. If you don't describe the tools properly, the agent won't know how to properly use them.LangChain provides a wide set of tools to get started, but also makes it easy to define your own (including custom descriptions). For a full list of tools, see hereToolkits‚ÄãOften the set of tools an agent has access to is more important than a single tool. For this LangChain provides the concept of toolkits - groups of tools needed to accomplish specific objectives. There are generally around 3-5 tools in a toolkit.LangChain provides a wide set of toolkits to get started. For a full list of toolkits, see hereAgentExecutor‚ÄãThe agent executor is the runtime for an agent. This is what actually calls the agent and executes the actions it chooses.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: This is powered by a language model and a prompt. The inputs to this chain are:List of available toolsUser inputAny previously executed steps (intermediate_steps)This chain then returns either the next action to take or the final response to send to the user (AgentAction or AgentFinish).Different agents have different prompting styles for reasoning, different ways of encoding input, and different ways of parsing the output. For a full list of agent types see agent typesTools‚ÄãTools are functions that an agent calls. There are two important considerations here:Giving the agent access to the right toolsDescribing the tools in a way that is most helpful to the agentWithout both, the agent you are trying to build will not work. If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objective. If you don't describe the tools properly, the agent won't know how to properly use them.LangChain provides a wide set of tools to get started, but also makes it easy to define your own (including custom descriptions). For a full list of tools, see hereToolkits‚ÄãOften the set of tools an agent has access to is more important than a single tool. For this LangChain provides the concept of toolkits - groups of tools needed to accomplish specific objectives. There are generally around 3-5 tools in a toolkit.LangChain provides a wide set of toolkits to get started. For a full list of toolkits, see hereAgentExecutor‚ÄãThe agent executor is the runtime for an agent. This is what actually calls the agent and executes the actions it chooses.
4,363
Pseudocode for this runtime is below:next_action = agent.get_action(...)while next_action != AgentFinish: observation = run(next_action) next_action = agent.get_action(..., next_action, observation)return next_actionWhile this may seem simple, there are several complexities this runtime handles for you, including:Handling cases where the agent selects a non-existent toolHandling cases where the tool errorsHandling cases where the agent produces output that cannot be parsed into a tool invocationLogging and observability at all levels (agent decisions, tool calls) either to stdout or LangSmith.Other types of agent runtimes‚ÄãThe AgentExecutor class is the main agent runtime supported by LangChain. However, there are other, more experimental runtimes we also support. These include:Plan-and-execute AgentBaby AGIAuto GPTGet started‚ÄãThis will go over how to get started building an agent. We will create this agent from scratch, using LangChain Expression Language. We will then define custom tools, and then run it in a custom loop (we will also show how to use the standard LangChain AgentExecutor).Set up the agent‚ÄãWe first need to create our agent. This is the chain responsible for determining what action to take next.In this example, we will use OpenAI Function Calling to create this agent. This is generally the most reliable way create agents. In this example we will show what it is like to construct this agent from scratch, using LangChain Expression Language.For this guide, we will construct a custom agent that has access to a custom tool. We are choosing this example because we think for most use cases you will NEED to customize either the agent or the tools. The tool we will give the agent is a tool to calculate the length of a word. This is useful because this is actually something LLMs can mess up due to tokenization. We will first create it WITHOUT memory, but we will then show how to add memory in.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: Pseudocode for this runtime is below:next_action = agent.get_action(...)while next_action != AgentFinish: observation = run(next_action) next_action = agent.get_action(..., next_action, observation)return next_actionWhile this may seem simple, there are several complexities this runtime handles for you, including:Handling cases where the agent selects a non-existent toolHandling cases where the tool errorsHandling cases where the agent produces output that cannot be parsed into a tool invocationLogging and observability at all levels (agent decisions, tool calls) either to stdout or LangSmith.Other types of agent runtimes‚ÄãThe AgentExecutor class is the main agent runtime supported by LangChain. However, there are other, more experimental runtimes we also support. These include:Plan-and-execute AgentBaby AGIAuto GPTGet started‚ÄãThis will go over how to get started building an agent. We will create this agent from scratch, using LangChain Expression Language. We will then define custom tools, and then run it in a custom loop (we will also show how to use the standard LangChain AgentExecutor).Set up the agent‚ÄãWe first need to create our agent. This is the chain responsible for determining what action to take next.In this example, we will use OpenAI Function Calling to create this agent. This is generally the most reliable way create agents. In this example we will show what it is like to construct this agent from scratch, using LangChain Expression Language.For this guide, we will construct a custom agent that has access to a custom tool. We are choosing this example because we think for most use cases you will NEED to customize either the agent or the tools. The tool we will give the agent is a tool to calculate the length of a word. This is useful because this is actually something LLMs can mess up due to tokenization. We will first create it WITHOUT memory, but we will then show how to add memory in.
4,364
Memory is needed to enable conversation.First, let's load the language model we're going to use to control the agent.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)Next, let's define some tools to use. Let's write a really simple Python function to calculate the length of a word that is passed in.from langchain.agents import tool@tooldef get_word_length(word: str) -> int: """Returns the length of a word.""" return len(word)tools = [get_word_length]Now let us create the prompt. Because OpenAI Function Calling is finetuned for tool usage, we hardly need any instructions on how to reason, or how to output format. We will just have two input variables: input (for the user question) and agent_scratchpad (for any previous steps taken)from langchain.prompts import ChatPromptTemplate, MessagesPlaceholderprompt = ChatPromptTemplate.from_messages([ ("system", "You are very powerful assistant, but bad at calculating lengths of words."), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad"),])How does the agent know what tools it can use? Those are passed in as a separate argument, so we can bind those as keyword arguments to the LLM.from langchain.tools.render import format_tool_to_openai_functionllm_with_tools = llm.bind( functions=[format_tool_to_openai_function(t) for t in tools])Putting those pieces together, we can now create the agent.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: Memory is needed to enable conversation.First, let's load the language model we're going to use to control the agent.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)Next, let's define some tools to use. Let's write a really simple Python function to calculate the length of a word that is passed in.from langchain.agents import tool@tooldef get_word_length(word: str) -> int: """Returns the length of a word.""" return len(word)tools = [get_word_length]Now let us create the prompt. Because OpenAI Function Calling is finetuned for tool usage, we hardly need any instructions on how to reason, or how to output format. We will just have two input variables: input (for the user question) and agent_scratchpad (for any previous steps taken)from langchain.prompts import ChatPromptTemplate, MessagesPlaceholderprompt = ChatPromptTemplate.from_messages([ ("system", "You are very powerful assistant, but bad at calculating lengths of words."), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad"),])How does the agent know what tools it can use? Those are passed in as a separate argument, so we can bind those as keyword arguments to the LLM.from langchain.tools.render import format_tool_to_openai_functionllm_with_tools = llm.bind( functions=[format_tool_to_openai_function(t) for t in tools])Putting those pieces together, we can now create the agent.
4,365
We will import two last utility functions: a component for formatting intermediate steps to messages, and a component for converting the output message into an agent action/agent finish.from langchain.agents.format_scratchpad import format_to_openai_functionsfrom langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParseragent = { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps'])} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()Now that we have our agent, let's play around with it! Let's pass in a simple question and empty intermediate steps and see what it returns:agent.invoke({ "input": "how many letters in the word educa?", "intermediate_steps": []})We can see that it responds with an AgentAction to take (it's actually an AgentActionMessageLog - a subclass of AgentAction which also tracks the full message log).So this is just the first step - now we need to write a runtime for this. The simplest one is just one that continuously loops, calling the agent, then taking the action, and repeating until an AgentFinish is returned. Let's code that up below:from langchain.schema.agent import AgentFinishintermediate_steps = []while True: output = agent.invoke({ "input": "how many letters in the word educa?", "intermediate_steps": intermediate_steps }) if isinstance(output, AgentFinish): final_result = output.return_values["output"] break else: print(output.tool, output.tool_input) tool = { "get_word_length": get_word_length }[output.tool] observation = tool.run(output.tool_input) intermediate_steps.append((output, observation))print(final_result)We can see this prints out the following:get_word_length {'word': 'educa'}There are 5 letters in the word "educa".Woo! It's working.To simplify this a bit, we can import and use the AgentExecutor class.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: We will import two last utility functions: a component for formatting intermediate steps to messages, and a component for converting the output message into an agent action/agent finish.from langchain.agents.format_scratchpad import format_to_openai_functionsfrom langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParseragent = { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps'])} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()Now that we have our agent, let's play around with it! Let's pass in a simple question and empty intermediate steps and see what it returns:agent.invoke({ "input": "how many letters in the word educa?", "intermediate_steps": []})We can see that it responds with an AgentAction to take (it's actually an AgentActionMessageLog - a subclass of AgentAction which also tracks the full message log).So this is just the first step - now we need to write a runtime for this. The simplest one is just one that continuously loops, calling the agent, then taking the action, and repeating until an AgentFinish is returned. Let's code that up below:from langchain.schema.agent import AgentFinishintermediate_steps = []while True: output = agent.invoke({ "input": "how many letters in the word educa?", "intermediate_steps": intermediate_steps }) if isinstance(output, AgentFinish): final_result = output.return_values["output"] break else: print(output.tool, output.tool_input) tool = { "get_word_length": get_word_length }[output.tool] observation = tool.run(output.tool_input) intermediate_steps.append((output, observation))print(final_result)We can see this prints out the following:get_word_length {'word': 'educa'}There are 5 letters in the word "educa".Woo! It's working.To simplify this a bit, we can import and use the AgentExecutor class.
4,366
This bundles up all of the above and adds in error handling, early stopping, tracing, and other quality-of-life improvements that reduce safeguards you need to write.from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)Now let's test it out!agent_executor.invoke({"input": "how many letters in the word educa?"}) > Entering new AgentExecutor chain... Invoking: `get_word_length` with `{'word': 'educa'}` 5 There are 5 letters in the word "educa". > Finished chain. 'There are 5 letters in the word "educa".'This is great - we have an agent! However, this agent is stateless - it doesn't remember anything about previous interactions. This means you can't ask follow up questions easily. Let's fix that by adding in memory.In order to do this, we need to do two things:Add a place for memory variables to go in the promptKeep track of the chat historyFirst, let's add a place for memory in the prompt. We do this by adding a placeholder for messages with the key "chat_history".
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: This bundles up all of the above and adds in error handling, early stopping, tracing, and other quality-of-life improvements that reduce safeguards you need to write.from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)Now let's test it out!agent_executor.invoke({"input": "how many letters in the word educa?"}) > Entering new AgentExecutor chain... Invoking: `get_word_length` with `{'word': 'educa'}` 5 There are 5 letters in the word "educa". > Finished chain. 'There are 5 letters in the word "educa".'This is great - we have an agent! However, this agent is stateless - it doesn't remember anything about previous interactions. This means you can't ask follow up questions easily. Let's fix that by adding in memory.In order to do this, we need to do two things:Add a place for memory variables to go in the promptKeep track of the chat historyFirst, let's add a place for memory in the prompt. We do this by adding a placeholder for messages with the key "chat_history".
4,367
Notice that we put this ABOVE the new user input (to follow the conversation flow).from langchain.prompts import MessagesPlaceholderMEMORY_KEY = "chat_history"prompt = ChatPromptTemplate.from_messages([ ("system", "You are very powerful assistant, but bad at calculating lengths of words."), MessagesPlaceholder(variable_name=MEMORY_KEY), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad"),])We can then set up a list to track the chat historyfrom langchain.schema.messages import HumanMessage, AIMessagechat_history = []We can then put it all together!agent = { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']), "chat_history": lambda x: x["chat_history"]} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)When running, we now need to track the inputs and outputs as chat historyinput1 = "how many letters in the word educa?"result = agent_executor.invoke({"input": input1, "chat_history": chat_history})chat_history.append(HumanMessage(content=input1))chat_history.append(AIMessage(content=result['output']))agent_executor.invoke({"input": "is that a real word?", "chat_history": chat_history})Next Steps​Awesome! You've now run your first end-to-end agent. To dive deeper, you can:Check out all the different agent types supportedLearn all the controls for AgentExecutorSee a full list of all the off-the-shelf toolkits we provideExplore all the individual tools supportedPreviousMultiple Memory classesNextAgent TypesAgentToolsToolkitsAgentExecutorOther types of agent runtimesGet startedSet up the agentNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
The core idea of agents is to use an LLM to choose a sequence of actions to take. ->: Notice that we put this ABOVE the new user input (to follow the conversation flow).from langchain.prompts import MessagesPlaceholderMEMORY_KEY = "chat_history"prompt = ChatPromptTemplate.from_messages([ ("system", "You are very powerful assistant, but bad at calculating lengths of words."), MessagesPlaceholder(variable_name=MEMORY_KEY), ("user", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad"),])We can then set up a list to track the chat historyfrom langchain.schema.messages import HumanMessage, AIMessagechat_history = []We can then put it all together!agent = { "input": lambda x: x["input"], "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']), "chat_history": lambda x: x["chat_history"]} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)When running, we now need to track the inputs and outputs as chat historyinput1 = "how many letters in the word educa?"result = agent_executor.invoke({"input": input1, "chat_history": chat_history})chat_history.append(HumanMessage(content=input1))chat_history.append(AIMessage(content=result['output']))agent_executor.invoke({"input": "is that a real word?", "chat_history": chat_history})Next Steps​Awesome! You've now run your first end-to-end agent. To dive deeper, you can:Check out all the different agent types supportedLearn all the controls for AgentExecutorSee a full list of all the off-the-shelf toolkits we provideExplore all the individual tools supportedPreviousMultiple Memory classesNextAgent TypesAgentToolsToolkitsAgentExecutorOther types of agent runtimesGet startedSet up the agentNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,368
Cookbook | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The page you're looking for has been moved to the cookbook section of the repo as a notebook.
The page you're looking for has been moved to the cookbook section of the repo as a notebook. ->: Cookbook | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,369
Cookbook | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The page you're looking for has been moved to the cookbook section of the repo as a notebook.
The page you're looking for has been moved to the cookbook section of the repo as a notebook. ->: Cookbook | 🦜�🔗 Langchain Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,370
Multiple Memory classes | 🦜️🔗 Langchain
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class.
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class. ->: Multiple Memory classes | 🦜️🔗 Langchain
4,371
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMultiple Memory classesMultiple Memory classesWe can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class.from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import ConversationChainfrom langchain.memory import ( ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory,)conv_memory = ConversationBufferMemory( memory_key="chat_history_lines", input_key="input")summary_memory = ConversationSummaryMemory(llm=OpenAI(), input_key="input")# Combinedmemory = CombinedMemory(memories=[conv_memory, summary_memory])_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Summary of conversation:{history}Current conversation:{chat_history_lines}Human: {input}AI:"""PROMPT = PromptTemplate( input_variables=["history", "input", "chat_history_lines"], template=_DEFAULT_TEMPLATE,)llm = OpenAI(temperature=0)conversation = ConversationChain(llm=llm, verbose=True, memory=memory, prompt=PROMPT)conversation.run("Hi!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class.
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMultiple Memory classesMultiple Memory classesWe can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class.from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import ConversationChainfrom langchain.memory import ( ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory,)conv_memory = ConversationBufferMemory( memory_key="chat_history_lines", input_key="input")summary_memory = ConversationSummaryMemory(llm=OpenAI(), input_key="input")# Combinedmemory = CombinedMemory(memories=[conv_memory, summary_memory])_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Summary of conversation:{history}Current conversation:{chat_history_lines}Human: {input}AI:"""PROMPT = PromptTemplate( input_variables=["history", "input", "chat_history_lines"], template=_DEFAULT_TEMPLATE,)llm = OpenAI(temperature=0)conversation = ConversationChain(llm=llm, verbose=True, memory=memory, prompt=PROMPT)conversation.run("Hi!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and
4,372
a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: Current conversation: Human: Hi! AI: > Finished chain. ' Hi there! How can I help you?'conversation.run("Can you tell me a joke?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: The human greets the AI, to which the AI responds with a polite greeting and an offer to help. Current conversation: Human: Hi! AI: Hi there! How can I help you? Human: Can you tell me a joke? AI: > Finished chain. ' Sure! What did the fish say when it hit the wall?\nHuman: I don\'t know.\nAI: "Dam!"'PreviousCustom MemoryNextAgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class.
We can use multiple memory classes in the same chain. To combine multiple memory classes, we initialize and use the CombinedMemory class. ->: a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: Current conversation: Human: Hi! AI: > Finished chain. ' Hi there! How can I help you?'conversation.run("Can you tell me a joke?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: The human greets the AI, to which the AI responds with a polite greeting and an offer to help. Current conversation: Human: Hi! AI: Hi there! How can I help you? Human: Can you tell me a joke? AI: > Finished chain. ' Sure! What did the fish say when it hit the wall?\nHuman: I don\'t know.\nAI: "Dam!"'PreviousCustom MemoryNextAgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,373
Message Memory in Agent backed by a database | 🦜️🔗 Langchain
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Message Memory in Agent backed by a database | 🦜️🔗 Langchain
4,374
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMessage Memory in Agent backed by a databaseMessage Memory in Agent backed by a databaseThis notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:Memory in LLMChainCustom AgentsMemory in AgentIn order to add a memory with an external message store to an agent we are going to do the following steps:We are going to create a RedisChatMessageHistory to connect to an external database to store the messages in.We are going to create an LLMChain using that chat history as memory.We are going to use that LLMChain to create a custom Agent.For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.from langchain.agents import ZeroShotAgent, Tool, AgentExecutorfrom langchain.memory import ConversationBufferMemoryfrom langchain.memory.chat_memory import ChatMessageHistoryfrom langchain.memory.chat_message_histories import RedisChatMessageHistoryfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", )]Notice the
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMessage Memory in Agent backed by a databaseMessage Memory in Agent backed by a databaseThis notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:Memory in LLMChainCustom AgentsMemory in AgentIn order to add a memory with an external message store to an agent we are going to do the following steps:We are going to create a RedisChatMessageHistory to connect to an external database to store the messages in.We are going to create an LLMChain using that chat history as memory.We are going to use that LLMChain to create a custom Agent.For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.from langchain.agents import ZeroShotAgent, Tool, AgentExecutorfrom langchain.memory import ConversationBufferMemoryfrom langchain.memory.chat_memory import ChatMessageHistoryfrom langchain.memory.chat_message_histories import RedisChatMessageHistoryfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", )]Notice the
4,375
questions about current events", )]Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"{chat_history}Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)Now we can create the RedisChatMessageHistory backed by the database.message_history = RedisChatMessageHistory( url="redis://localhost:6379/0", ttl=600, session_id="my-session")memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=message_history)We can now construct the LLMChain, with the Memory object, and then create the agent.llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)agent_chain.run(input="How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: questions about current events", )]Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"{chat_history}Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)Now we can create the RedisChatMessageHistory backed by the database.message_history = RedisChatMessageHistory( url="redis://localhost:6379/0", ttl=600, session_id="my-session")memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=message_history)We can now construct the LLMChain, with the Memory object, and then create the agent.llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)agent_chain.run(input="How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census
4,376
and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.agent_chain.run(input="what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I need to find out what the national anthem of Canada is called. Action: Search Action Input: National Anthem of Canada Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.agent_chain.run(input="what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I need to find out what the national anthem of Canada is called. Action: Search Action Input: National Anthem of Canada Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The
4,377
proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ... Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".'We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada's national anthem was.For fun, let's compare this to an agent that does NOT have memory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix,
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ... Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".'We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada's national anthem was.For fun, let's compare this to an agent that does NOT have memory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix,
4,378
tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"])llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_without_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"])llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_without_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by
4,379
COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer Action: Search Action Input: national anthem of [country] Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer Action: Search Action Input: national anthem of [country] Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino
4,380
1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own. Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].'PreviousMemory in AgentNextCustomizing Conversational MemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own. Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].'PreviousMemory in AgentNextCustomizing Conversational MemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,381
Memory in Agent | 🦜️🔗 Langchain
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Memory in Agent | 🦜️🔗 Langchain
4,382
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory in AgentMemory in AgentThis notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:Memory in LLMChainCustom AgentsIn order to add a memory to an agent we are going to the the following steps:We are going to create an LLMChain with memory.We are going to use that LLMChain to create a custom Agent.For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.from langchain.agents import ZeroShotAgent, Tool, AgentExecutorfrom langchain.memory import ConversationBufferMemoryfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", )]Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"{chat_history}Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix,
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory in AgentMemory in AgentThis notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:Memory in LLMChainCustom AgentsIn order to add a memory to an agent we are going to the the following steps:We are going to create an LLMChain with memory.We are going to use that LLMChain to create a custom Agent.For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.from langchain.agents import ZeroShotAgent, Tool, AgentExecutorfrom langchain.memory import ConversationBufferMemoryfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", )]Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"{chat_history}Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix,
4,383
tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)memory = ConversationBufferMemory(memory_key="chat_history")We can now construct the LLMChain, with the Memory object, and then create the agent.llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)agent_chain.run(input="How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)memory = ConversationBufferMemory(memory_key="chat_history")We can now construct the LLMChain, with the Memory object, and then create the agent.llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)agent_chain.run(input="How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are
4,384
Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.agent_chain.run(input="what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I need to find out what the national anthem of Canada is called. Action: Search Action Input: National Anthem of Canada Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem,
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.agent_chain.run(input="what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I need to find out what the national anthem of Canada is called. Action: Search Action Input: National Anthem of Canada Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem,
4,385
Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ... Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".'We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada's national anthem was.For fun, let's compare this to an agent that does NOT have memory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"])llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_without_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ... Thought: I now know the final answer. Final Answer: The national anthem of Canada is called "O Canada". > Finished AgentExecutor chain. 'The national anthem of Canada is called "O Canada".'We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada's national anthem was.For fun, let's compare this to an agent that does NOT have memory.prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""suffix = """Begin!"Question: {input}{agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"])llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_without_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True)agent_without_memory.run("How many people live in canada?") > Entering new AgentExecutor chain... Thought: I need to find out the population of Canada Action: Search Action Input: Population of Canada Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration
4,386
31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time. Thought: I now know the final answer Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. > Finished AgentExecutor chain. 'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'agent_without_memory.run("what is their national anthem called?") > Entering new AgentExecutor chain... Thought: I should look up the answer
4,387
Thought: I should look up the answer Action: Search Action Input: national anthem of [country] Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own. Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].'PreviousMemory in the Multi-Input ChainNextMessage Memory in Agent backed by a
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: Thought: I should look up the answer Action: Search Action Input: national anthem of [country] Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own. Thought: I now know the final answer Final Answer: The national anthem of [country] is [name of anthem]. > Finished AgentExecutor chain. 'The national anthem of [country] is [name of anthem].'PreviousMemory in the Multi-Input ChainNextMessage Memory in Agent backed by a
4,388
ChainNextMessage Memory in Agent backed by a databaseCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:
This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them: ->: ChainNextMessage Memory in Agent backed by a databaseCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,389
Memory in the Multi-Input Chain | 🦜️🔗 Langchain
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question. ->: Memory in the Multi-Input Chain | 🦜️🔗 Langchain
4,390
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory in the Multi-Input ChainMemory in the Multi-Input ChainMost memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.embeddings.cohere import CohereEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.elastic_vector_search import ElasticVectorSearchfrom langchain.vectorstores import Chromafrom langchain.docstore.document import Documentwith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_texts( texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.query = "What did the president say about Justice Breyer"docs = docsearch.similarity_search(query)from langchain.chains.question_answering import load_qa_chainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.memory import ConversationBufferMemorytemplate = """You are a chatbot having a conversation with
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory in the Multi-Input ChainMemory in the Multi-Input ChainMost memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.embeddings.cohere import CohereEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.elastic_vector_search import ElasticVectorSearchfrom langchain.vectorstores import Chromafrom langchain.docstore.document import Documentwith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_texts( texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.query = "What did the president say about Justice Breyer"docs = docsearch.similarity_search(query)from langchain.chains.question_answering import load_qa_chainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.memory import ConversationBufferMemorytemplate = """You are a chatbot having a conversation with
4,391
= """You are a chatbot having a conversation with a human.Given the following extracted parts of a long document and a question, create a final answer.{context}{chat_history}Human: {human_input}Chatbot:"""prompt = PromptTemplate( input_variables=["chat_history", "human_input", "context"], template=template)memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")chain = load_qa_chain( OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "human_input": query}, return_only_outputs=True) {'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'}print(chain.memory.buffer) Human: What did the president say about Justice Breyer AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.PreviousMemory in LLMChainNextMemory in AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. We will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question. ->: = """You are a chatbot having a conversation with a human.Given the following extracted parts of a long document and a question, create a final answer.{context}{chat_history}Human: {human_input}Chatbot:"""prompt = PromptTemplate( input_variables=["chat_history", "human_input", "context"], template=template)memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")chain = load_qa_chain( OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "human_input": query}, return_only_outputs=True) {'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'}print(chain.memory.buffer) Human: What did the president say about Justice Breyer AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.PreviousMemory in LLMChainNextMemory in AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,392
Customizing Conversational Memory | 🦜️🔗 Langchain
This notebook walks through a few ways to customize conversational memory.
This notebook walks through a few ways to customize conversational memory. ->: Customizing Conversational Memory | 🦜️🔗 Langchain
4,393
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryCustomizing Conversational MemoryOn this pageCustomizing Conversational MemoryThis notebook walks through a few ways to customize conversational memory.from langchain.llms import OpenAIfrom langchain.chains import ConversationChainfrom langchain.memory import ConversationBufferMemoryllm = OpenAI(temperature=0)AI prefix​The first way to do so is by changing the AI prefix in the conversation summary. By default, this is set to "AI", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below.# Here it is by default set to "AI"conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting:
This notebook walks through a few ways to customize conversational memory.
This notebook walks through a few ways to customize conversational memory. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryCustomizing Conversational MemoryOn this pageCustomizing Conversational MemoryThis notebook walks through a few ways to customize conversational memory.from langchain.llms import OpenAIfrom langchain.chains import ConversationChainfrom langchain.memory import ConversationBufferMemoryllm = OpenAI(temperature=0)AI prefix​The first way to do so is by changing the AI prefix in the conversation summary. By default, this is set to "AI", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below.# Here it is by default set to "AI"conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting:
4,394
chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the next few days is sunny with temperatures in the mid-70s.'# Now we can override it and set it to "AI Assistant"from langchain.prompts.prompt import PromptTemplatetemplate = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:{history}Human: {input}AI Assistant:"""PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(ai_prefix="AI Assistant"),)conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an
This notebook walks through a few ways to customize conversational memory.
This notebook walks through a few ways to customize conversational memory. ->: chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the next few days is sunny with temperatures in the mid-70s.'# Now we can override it and set it to "AI Assistant"from langchain.prompts.prompt import PromptTemplatetemplate = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:{history}Human: {input}AI Assistant:"""PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(ai_prefix="AI Assistant"),)conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an
4,395
is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI Assistant: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is sunny with a high of 78 degrees and a low of 65 degrees.'Human prefix‚ÄãThe next way to do so is by changing the Human prefix in the conversation summary. By default, this is set to "Human", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below.# Now we can override it and set it to "Friend"from langchain.prompts.prompt import PromptTemplatetemplate = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:{history}Friend: {input}AI:"""PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(human_prefix="Friend"),)conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there!
This notebook walks through a few ways to customize conversational memory.
This notebook walks through a few ways to customize conversational memory. ->: is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI Assistant: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is sunny with a high of 78 degrees and a low of 65 degrees.'Human prefix‚ÄãThe next way to do so is by changing the Human prefix in the conversation summary. By default, this is set to "Human", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below.# Now we can override it and set it to "Friend"from langchain.prompts.prompt import PromptTemplatetemplate = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.Current conversation:{history}Friend: {input}AI:"""PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(human_prefix="Friend"),)conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there!
4,396
Current conversation: Friend: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Friend: What's the weather? AI: > Finished ConversationChain chain. ' The weather right now is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is mostly sunny with a high of 82 degrees.'PreviousMessage Memory in Agent backed by a databaseNextCustom MemoryAI prefixHuman prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook walks through a few ways to customize conversational memory.
This notebook walks through a few ways to customize conversational memory. ->: Current conversation: Friend: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?"conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Friend: What's the weather? AI: > Finished ConversationChain chain. ' The weather right now is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is mostly sunny with a high of 82 degrees.'PreviousMessage Memory in Agent backed by a databaseNextCustom MemoryAI prefixHuman prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,397
Memory types | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesConversation BufferConversation Buffer WindowEntityConversation Knowledge GraphConversation SummaryConversation Summary BufferConversation Token BufferBacked by a Vector StoreMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory typesMemory typesThere are many different types of memory. Each has their own parameters, their own return types, and is useful in different scenarios. Please see their individual page for more detail on each one.PreviousChat MessagesNextConversation BufferCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
There are many different types of memory.
There are many different types of memory. ->: Memory types | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesConversation BufferConversation Buffer WindowEntityConversation Knowledge GraphConversation SummaryConversation Summary BufferConversation Token BufferBacked by a Vector StoreMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory typesMemory typesThere are many different types of memory. Each has their own parameters, their own return types, and is useful in different scenarios. Please see their individual page for more detail on each one.PreviousChat MessagesNextConversation BufferCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
4,398
Conversation Token Buffer | 🦜️🔗 Langchain
ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.
ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions. ->: Conversation Token Buffer | 🦜️🔗 Langchain
4,399
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesConversation BufferConversation Buffer WindowEntityConversation Knowledge GraphConversation SummaryConversation Summary BufferConversation Token BufferBacked by a Vector StoreMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory typesConversation Token BufferOn this pageConversation Token BufferConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.Let's first walk through how to use the utilities.Using memory with LLM​from langchain.memory import ConversationTokenBufferMemoryfrom langchain.llms import OpenAIllm = OpenAI()memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'}We can also get the history as a list of messages (this is useful if you are using this with a chat model).memory = ConversationTokenBufferMemory( llm=llm, max_token_limit=10, return_messages=True)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})Using in a chain​Let's walk through an example, again setting verbose=True so we can see the prompt.from langchain.chains import ConversationChainconversation_with_summary = ConversationChain( llm=llm, # We set a
ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.
ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryChat MessagesMemory typesConversation BufferConversation Buffer WindowEntityConversation Knowledge GraphConversation SummaryConversation Summary BufferConversation Token BufferBacked by a Vector StoreMemory in LLMChainMemory in the Multi-Input ChainMemory in AgentMessage Memory in Agent backed by a databaseCustomizing Conversational MemoryCustom MemoryMultiple Memory classesAgentsCallbacksModulesSecurityGuidesMoreModulesMemoryMemory typesConversation Token BufferOn this pageConversation Token BufferConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.Let's first walk through how to use the utilities.Using memory with LLM​from langchain.memory import ConversationTokenBufferMemoryfrom langchain.llms import OpenAIllm = OpenAI()memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'}We can also get the history as a list of messages (this is useful if you are using this with a chat model).memory = ConversationTokenBufferMemory( llm=llm, max_token_limit=10, return_messages=True)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})Using in a chain​Let's walk through an example, again setting verbose=True so we can see the prompt.from langchain.chains import ConversationChainconversation_with_summary = ConversationChain( llm=llm, # We set a