langchain.memory.kg.ConversationKGMemory¶
class langchain.memory.kg.ConversationKGMemory(*, chat_memory: ~langchain.schema.memory.BaseChatMessageHistory = None, output_key: ~typing.Optional[str] = None, input_key: ~typing.Optional[str] = None, return_messages: bool = False, k: int = 2, human_prefix: str = 'Human', ai_prefix: str = 'AI', kg: ~langchain.graphs.networkx_graph.NetworkxEntityGraph = None, knowledge_extraction_prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), llm: ~langchain.schema.language_model.BaseLanguageModel, summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>, memory_key: str = 'history')[source]¶
Bases: BaseChatMemory
Knowledge graph conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param entity_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 2¶
Number of previous utterances to include in the context.
param kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]¶
param knowledge_extraction_prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param summary_message_cls: Type[langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents.
get_current_entities(input_string: str) → List[str][source]¶
get_knowledge_triplets(input_string: str) → List[KnowledgeTriple][source]¶
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
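A minimal usage sketch (assuming an OpenAI API key is configured; any BaseLanguageModel works in place of OpenAI):

from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)

# Each save_context call has the LLM extract knowledge triples
# from the new turn and add them to the entity graph.
memory.save_context({"input": "Sam is my friend"}, {"output": "Who is Sam?"})
memory.save_context({"input": "Sam is an engineer in Berlin"}, {"output": "Good to know!"})

# Loading memory surfaces what the graph knows about entities in the input.
print(memory.load_memory_variables({"input": "What do you know about Sam?"}))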
Examples using ConversationKGMemory¶
Conversation Knowledge Graph Memory
langchain.memory.chat_message_histories.file.FileChatMessageHistory¶
class langchain.memory.chat_message_histories.file.FileChatMessageHistory(file_path: str)[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in a local file.
Parameters
file_path – path of the local file to store the messages.
Methods
__init__(file_path)
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in the local file
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from the local file
Attributes
messages
Retrieve the messages from the local file
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in the local file
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from the local file
property messages: List[langchain.schema.messages.BaseMessage]¶
Retrieve the messages from the local file
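A brief sketch of typical use (the file path is illustrative; the file is created on first write):

from langchain.memory import FileChatMessageHistory

history = FileChatMessageHistory(file_path="chat_history.json")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]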
Examples using FileChatMessageHistory¶
AutoGPT
langchain.memory.token_buffer.ConversationTokenBufferMemory¶
class langchain.memory.token_buffer.ConversationTokenBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: BaseLanguageModel, memory_key: str = 'history', max_token_limit: int = 2000)[source]¶
Bases: BaseChatMemory
Conversation chat memory with token limit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer. Pruned.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
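A short sketch, assuming an OpenAI key (the llm is used to count tokens when pruning):

from langchain.llms import OpenAI
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=50)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
# Older turns are pruned once the buffer exceeds max_token_limit tokens.
print(memory.load_memory_variables({}))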
Examples using ConversationTokenBufferMemory¶
ConversationTokenBufferMemory
langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory¶
class langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
Bases: BaseChatMessageHistory
Chat message history stored in a Postgres database.
Methods
__init__(session_id[, connection_string, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in PostgreSQL
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from PostgreSQL
Attributes
messages
Retrieve the messages from PostgreSQL
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in PostgreSQL
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from PostgreSQL
property messages: List[langchain.schema.messages.BaseMessage]¶
Retrieve the messages from PostgreSQL
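A usage sketch (the connection string shown is the constructor default; assumes a reachable Postgres instance and a Postgres driver installed):

from langchain.memory import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    session_id="my-session",
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)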
Examples using PostgresChatMessageHistory¶
Postgres Chat Message History
langchain.memory.entity.InMemoryEntityStore¶
class langchain.memory.entity.InMemoryEntityStore(*, store: Dict[str, Optional[str]] = {})[source]¶
Bases: BaseEntityStore
In-memory Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param store: Dict[str, Optional[str]] = {}¶
clear() → None[source]¶
Delete all entities from store.
delete(key: str) → None[source]¶
Delete entity value from store.
exists(key: str) → bool[source]¶
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
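A quick sketch of the store interface (entity name and value are illustrative):

from langchain.memory.entity import InMemoryEntityStore

store = InMemoryEntityStore()
store.set("Deven", "Deven is working on a hackathon project.")
print(store.exists("Deven"))                  # True
print(store.get("Deven"))
store.delete("Deven")
print(store.get("Deven", default="unknown"))  # "unknown"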
langchain.memory.simple.SimpleMemory¶
class langchain.memory.simple.SimpleMemory(*, memories: Dict[str, Any] = {})[source]¶
Bases: BaseMemory
Simple memory for storing context or other information that shouldn’t
ever change between prompts.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memories: Dict[str, Any] = {}¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed, my memory is set in stone.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
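A minimal sketch (the keys and values are illustrative):

from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"venue": "The Grand Hall", "budget": "$500"})
# The same key-value pairs are returned on every load; save_context is a no-op.
print(memory.load_memory_variables({}))  # {'venue': ..., 'budget': ...}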
langchain.memory.chat_message_histories.in_memory.ChatMessageHistory¶
class langchain.memory.chat_message_histories.in_memory.ChatMessageHistory(*, messages: List[BaseMessage] = [])[source]¶
Bases: BaseChatMessageHistory, BaseModel
In-memory implementation of chat message history.
Stores messages in an in-memory list.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param messages: List[langchain.schema.messages.BaseMessage] = []¶
A list of Messages stored in-memory.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Remove all messages from the store
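A quick sketch of basic use:

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)
history.clear()  # drops all stored messages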
Examples using ChatMessageHistory¶
Adding Message Memory backed by a database to an Agent
langchain.memory.zep_memory.ZepMemory¶
class langchain.memory.zep_memory.ZepMemory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history')[source]¶
Bases: ConversationBufferMemory
Persist your chain history to the Zep Memory Server.
The number of messages returned by Zep and when the Zep server summarizes chat
histories is configurable. See the Zep documentation for more details.
Documentation: https://docs.getzep.com
Example
memory = ZepMemory(
    session_id=session_id,   # Identifies your user or a user's session
    url=ZEP_API_URL,         # Your Zep server's URL
    api_key=<your_api_key>,  # Optional
    memory_key="history",    # Ensure this matches the key used in
                             # chain's prompt template
    return_messages=True,    # Does your prompt template expect a string
                             # or a list of Messages?
)
chain = LLMChain(memory=memory, ...)  # Configure your chain to use the ZepMemory instance
Note
To persist metadata alongside your chat history, you will need to create a
custom Chain class that overrides the prep_outputs method to include the metadata
in the call to self.memory.save_context.
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Initialize ZepMemory.
Parameters
session_id (str) – Identifies your user or a user’s session
url (str, optional) – Your Zep server’s URL. Defaults to
“http://localhost:8000”.
api_key (Optional[str], optional) – Your Zep API key. Defaults to None.
output_key (Optional[str], optional) – The key to use for the output message.
Defaults to None.
input_key (Optional[str], optional) – The key to use for the input message.
Defaults to None.
return_messages (bool, optional) – Does your prompt template expect a string
or a list of Messages? Defaults to False
i.e. return a string.
human_prefix (str, optional) – The prefix to use for human messages.
Defaults to “Human”.
ai_prefix (str, optional) – The prefix to use for AI messages.
Defaults to “AI”.
memory_key (str, optional) – The key to use for the memory.
Defaults to “history”.
Ensure that this matches the key used in
chain’s prompt template.
param ai_prefix: str = 'AI'¶
param chat_memory: ZepChatMessageHistory [Required]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str], metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) – The inputs to the chain.
outputs (Dict[str, str]) – The outputs from the chain.
metadata (Optional[Dict[str, Any]], optional) – Any metadata to save with
the context. Defaults to None
Returns
None
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: Any¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
Examples using ZepMemory¶
Zep Memory
langchain.memory.chat_message_histories.redis.RedisChatMessageHistory¶
class langchain.memory.chat_message_histories.redis.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history stored in a Redis database.
Methods
__init__(session_id[, url, key_prefix, ttl])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in Redis
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from Redis
Attributes
key
Construct the record key to use
messages
Retrieve the messages from Redis
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in Redis
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from Redis
property key: str¶
Construct the record key to use
property messages: List[langchain.schema.messages.BaseMessage]¶
Retrieve the messages from Redis
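A usage sketch (assumes a Redis server at the default URL; ttl is optional):

from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="my-session",
    url="redis://localhost:6379/0",
    ttl=600,  # optional: expire the record after 10 minutes
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)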
Examples using RedisChatMessageHistory¶
Redis Chat Message History
Adding Message Memory backed by a database to an Agent
langchain.memory.buffer.ConversationBufferMemory¶
class langchain.memory.buffer.ConversationBufferMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history')[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: Any¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
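A short sketch of the two common configurations:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})
print(memory.load_memory_variables({}))  # {'history': 'Human: hi\nAI: whats up'}

# With return_messages=True the history is a list of Message objects instead,
# which suits chat-model prompt templates.
memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
print(memory.load_memory_variables({}))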
Examples using ConversationBufferMemory¶
Gradio Tools
SceneXplain
Dynamodb Chat Message History
Chat Over Documents with Vectara
Bedrock
QA over Documents
Structure answers with OpenAI functions
Agent Debates with Tools
Adding Message Memory backed by a database to an Agent
How to add memory to a Multi-Input Chain
How to add Memory to an LLMChain
How to use multiple memory classes in the same chain
How to customize conversational memory
How to add Memory to an Agent
Shared memory across agents and tools
Add Memory to OpenAI Functions Agent
Retrieval QA using OpenAI functions
langchain.memory.chat_memory.BaseChatMemory¶
class langchain.memory.chat_memory.BaseChatMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False)[source]¶
Bases: BaseMemory, ABC
Abstract base class for chat memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param chat_memory: langchain.schema.memory.BaseChatMessageHistory [Optional]¶
param input_key: Optional[str] = None¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None[source]¶
Clear memory contents.
abstract load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]¶
Return key-value pairs given the text input to the chain.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
abstract property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
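A sketch of a custom subclass (the class and key names are illustrative): implement the two abstract members, and the inherited save_context/clear come for free.

from typing import Any, Dict, List

from langchain.memory.chat_memory import BaseChatMemory


class LastTurnMemory(BaseChatMemory):
    """Expose only the most recent human/AI exchange."""

    memory_key: str = "history"

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # chat_memory defaults to an in-memory ChatMessageHistory.
        return {self.memory_key: self.chat_memory.messages[-2:]}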
langchain.memory.utils.get_prompt_input_key¶
langchain.memory.utils.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) → str[source]¶
Get the prompt input key.
Parameters
inputs – Dict[str, Any]
memory_variables – List[str]
Returns
A prompt input key.
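A small sketch: after removing the memory variables (and the reserved "stop" key) from the inputs, exactly one key should remain, and that key is returned.

from langchain.memory.utils import get_prompt_input_key

inputs = {"question": "What is LangChain?", "history": "..."}
print(get_prompt_input_key(inputs, memory_variables=["history"]))  # "question"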
langchain.memory.summary.SummarizerMixin¶
class langchain.memory.summary.SummarizerMixin(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', llm: ~langchain.schema.language_model.BaseLanguageModel, prompt: ~langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls: ~typing.Type[~langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>)[source]¶
Bases: BaseModel
Mixin for summarizer.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param human_prefix: str = 'Human'¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True)¶
param summary_message_cls: Type[langchain.schema.messages.BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
predict_new_summary(messages: List[BaseMessage], existing_summary: str) → str[source]¶
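SummarizerMixin is usually consumed indirectly through memories that mix it in, e.g. ConversationSummaryMemory; a sketch (assuming an OpenAI key):

from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})
# Internally, predict_new_summary folds each new exchange into the running summary.
print(memory.load_memory_variables({}))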
langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory¶
class langchain.memory.chat_message_histories.cassandra.CassandraChatMessageHistory(session_id: str, session: Session, keyspace: str, table_name: str = 'message_store', ttl_seconds: int | None = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history that stores history in Cassandra.
Parameters
session_id – arbitrary key that is used to store the messages
of a single chat session.
session – a Cassandra Session object (an open DB connection)
keyspace – name of the keyspace to use.
table_name – name of the table to use.
ttl_seconds – time-to-live (seconds) for automatic expiration
of stored entries. None (default) for no expiration.
Methods
__init__(session_id, session, keyspace[, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Write a message to the table
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from DB
Attributes
messages
Retrieve all session messages from DB
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Write a message to the table
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from DB
property messages: List[langchain.schema.messages.BaseMessage]¶
Retrieve all session messages from DB
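A connection sketch (assumes a reachable Cassandra cluster and the cassandra-driver package; the address and keyspace are illustrative):

from cassandra.cluster import Cluster
from langchain.memory import CassandraChatMessageHistory

session = Cluster(["127.0.0.1"]).connect()
history = CassandraChatMessageHistory(
    session_id="chat-session-1",
    session=session,
    keyspace="chat_history",
    ttl_seconds=3600,  # optional automatic expiry
)
history.add_user_message("hi!")
print(history.messages)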
Examples using CassandraChatMessageHistory¶
Cassandra Chat Message History
Cassandra
langchain.memory.buffer_window.ConversationBufferWindowMemory¶
class langchain.memory.buffer_window.ConversationBufferWindowMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history', k: int = 5)[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory inside a limited size window.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 5¶
Number of messages to store in buffer.
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property buffer: List[langchain.schema.messages.BaseMessage]¶
String buffer of memory.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
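A short sketch of the sliding window:

from langchain.memory import ConversationBufferWindowMemory

# k counts exchanges: k=1 keeps only the most recent human/AI pair.
memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
print(memory.load_memory_variables({}))  # only the second exchange survives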
Examples using ConversationBufferWindowMemory¶
Figma
Meta-Prompt
Voice Assistant
Create ChatGPT clone
langchain.memory.entity.BaseEntityStore¶
class langchain.memory.entity.BaseEntityStore[source]¶
Bases: BaseModel, ABC
Abstract base class for Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract clear() → None[source]¶
Delete all entities from store.
abstract delete(key: str) → None[source]¶
Delete entity value from store.
abstract exists(key: str) → bool[source]¶
Check if entity exists in store.
abstract get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
abstract set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
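A sketch of a minimal concrete subclass backed by a plain dict (mirroring InMemoryEntityStore; the class name is illustrative):

from typing import Dict, Optional

from langchain.memory.entity import BaseEntityStore


class DictEntityStore(BaseEntityStore):
    data: Dict[str, Optional[str]] = {}

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        return self.data.get(key, default)

    def set(self, key: str, value: Optional[str]) -> None:
        self.data[key] = value

    def delete(self, key: str) -> None:
        self.data.pop(key, None)

    def exists(self, key: str) -> bool:
        return key in self.data

    def clear(self) -> None:
        self.data.clear()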
langchain.memory.chat_message_histories.zep.ZepChatMessageHistory¶
class langchain.memory.chat_message_histories.zep.ZepChatMessageHistory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
    api_key=<your_api_key>,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Methods
__init__(session_id[, url, api_key])
add_ai_message(message[, metadata])
Convenience method for adding an AI message string to the store.
add_message(message[, metadata])
Append the message to the Zep memory history
add_user_message(message[, metadata])
Convenience method for adding a human message string to the store.
clear()
Clear session memory from Zep.
search(query[, metadata, limit])
Search Zep memory for messages matching the query
Attributes
messages
Retrieve messages from Zep memory
zep_messages
Retrieve the Zep-native messages from Zep memory
zep_summary
Retrieve summary from Zep memory
add_ai_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
metadata – Optional metadata to attach to the message.
add_message(message: BaseMessage, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Append the message to the Zep memory history
add_user_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
metadata – Optional metadata to attach to the message.
clear() → None[source]¶
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
search(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) → List[MemorySearchResult][source]¶
Search Zep memory for messages matching the query
property messages: List[langchain.schema.messages.BaseMessage]¶
Retrieve messages from Zep memory
property zep_messages: List[Message]¶
Retrieve the Zep-native messages from Zep memory
property zep_summary: Optional[str]¶
Retrieve summary from Zep memory
Examples using ZepChatMessageHistory¶
Zep
langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory¶
class langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Bases: BaseChatMessageHistory
Chat message history backed by Azure CosmosDB.
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
Methods
__init__(cosmos_endpoint, cosmos_database, ...)
Initializes a new instance of the CosmosDBChatMessageHistory class.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a self-created message to the store
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from this memory and cosmos.
load_messages()
Retrieve the messages from Cosmos
prepare_cosmos()
Prepare the CosmosDB client.
upsert_messages()
Update the cosmosdb item.
Attributes
messages
A list of Messages stored in-memory.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from this memory and cosmos.
load_messages() → None[source]¶
Retrieve the messages from Cosmos
prepare_cosmos() → None[source]¶
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages() → None[source]¶
Update the cosmosdb item.
messages: List[BaseMessage]¶
A list of Messages stored in-memory.
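A setup sketch (requires the azure-cosmos package; all endpoint and name values are illustrative placeholders):

from langchain.memory import CosmosDBChatMessageHistory

history = CosmosDBChatMessageHistory(
    cosmos_endpoint="https://<your-account>.documents.azure.com:443/",
    cosmos_database="chat_db",
    cosmos_container="messages",
    session_id="session-1",
    user_id="user-1",
    connection_string="<your-connection-string>",
)
history.prepare_cosmos()  # ensure the database and container exist
history.add_user_message("hi!")
print(history.messages)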
langchain.load.dump.default¶
langchain.load.dump.default(obj: Any) → Any[source]¶
Return a default value for a Serializable object or
a SerializedNotImplemented object.
langchain.load.serializable.SerializedConstructor¶
class langchain.load.serializable.SerializedConstructor[source]¶
Bases: dict
Serialized constructor.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
kwargs
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
kwargs: Dict[str, Any]¶
lc: int¶
type: Literal['constructor']¶
langchain.load.load.loads¶
langchain.load.load.loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶
Load a JSON object from a string.
Parameters
text – The string to load.
secrets_map – A map of secrets to load.
valid_namespaces – A list of additional namespaces (modules)
to allow to be deserialized.
Returns:
The loaded object.
langchain.load.dump.dumps¶
langchain.load.dump.dumps(obj: Any, *, pretty: bool = False) → str[source]¶
Return a JSON string representation of an object.
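A round-trip sketch combining dumps and loads (assumes an OpenAI key; loads re-injects the secret via secrets_map):

from langchain.chat_models import ChatOpenAI
from langchain.load.dump import dumps
from langchain.load.load import loads

llm = ChatOpenAI(temperature=0)
text = dumps(llm, pretty=True)  # JSON string; secrets are replaced by their ids
revived = loads(text, secrets_map={"OPENAI_API_KEY": "<your-key>"})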
langchain.load.dump.dumpd¶
langchain.load.dump.dumpd(obj: Any) → Dict[str, Any][source]¶
Return a JSON dict representation of an object.
langchain.load.serializable.Serializable¶
class langchain.load.serializable.Serializable[source]¶
Bases: BaseModel, ABC
Serializable base class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
to_json() → Union[SerializedConstructor, SerializedNotImplemented][source]¶
to_json_not_implemented() → SerializedNotImplemented[source]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
extra = 'ignore'¶
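A sketch of opting a custom object into serialization (the class and env-var names are illustrative):

from langchain.load.serializable import Serializable


class MyComponent(Serializable):
    temperature: float = 0.0
    api_key: str = ""

    @property
    def lc_serializable(self) -> bool:
        return True

    @property
    def lc_secrets(self) -> dict:
        # Map the constructor argument to the secret id that replaces it.
        return {"api_key": "MY_API_KEY"}


print(MyComponent(temperature=0.5).to_json())  # SerializedConstructor dict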
langchain.load.serializable.BaseSerialized¶
class langchain.load.serializable.BaseSerialized[source]¶
Bases: TypedDict
Base class for serialized objects.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
lc
id
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
langchain.load.serializable.to_json_not_implemented¶
langchain.load.serializable.to_json_not_implemented(obj: object) → SerializedNotImplemented[source]¶
Serialize a “not implemented” object.
Parameters
obj – object to serialize
Returns
SerializedNotImplemented
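A tiny sketch: any object can be snapshotted this way when real serialization is unavailable:

from langchain.load.serializable import to_json_not_implemented

snapshot = to_json_not_implemented(object())
print(snapshot["type"])  # 'not_implemented'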
langchain.load.serializable.SerializedNotImplemented¶
class langchain.load.serializable.SerializedNotImplemented[source]¶
Bases: dict
Serialized not implemented.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶ | https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html |
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
type: Literal['not_implemented']¶ | https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html |
langchain.load.serializable.SerializedSecret¶
class langchain.load.serializable.SerializedSecret[source]¶
Bases: dict
Serialized secret.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
type
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶ | https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html |
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
id: List[str]¶
lc: int¶
type: Literal['secret']¶ | https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html |
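A sketch of the expected shape, assuming a secret id such as OPENAI_API_KEY:
.. code-block:: python
from langchain.load.serializable import SerializedSecret

# The serialized stand-in for a secret constructor argument:
secret: SerializedSecret = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],
}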
langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent¶
langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm: BaseChatModel, toolkit: Optional[PowerBIToolkit] = None, powerbi: Optional[PowerBIDataset] = None, callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant should try to create a correct and complete answer to the question from the user. If the user asks a question not related to the dataset it should return "This does not appear to be part of this dataset." as the answer. The user might make a mistake with the spelling of certain values, if you think that is the case, ask the user to confirm the spelling of the value and then run the query again. Unless the user specifies a specific number of examples they wish to obtain, and the results are too large, limit your query to at most {top_k} results, but make it clear when answering which field was used for the filtering. The user has access to these tables: {{tables}}.\n\nThe answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. \n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
77d31cbdf13c-3 | Construct a Power BI agent from a Chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both. | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent.html |
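A hedged usage sketch (the dataset id, table names, and token below are placeholders; the field names follow the PowerBIDataset signature and should be verified against your version):
.. code-block:: python
from langchain.agents.agent_toolkits import create_pbi_chat_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.powerbi import PowerBIDataset

dataset = PowerBIDataset(
    dataset_id="<dataset-id>",           # placeholder
    table_names=["Sales", "Customers"],  # placeholder
    token="<aad-token>",                 # placeholder
)
agent_executor = create_pbi_chat_agent(
    llm=ChatOpenAI(temperature=0), powerbi=dataset, verbose=True
)
agent_executor.run("How many rows are in the Sales table?")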
langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit¶
class langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit(*, vectorstores: List[VectorStoreInfo], llm: BaseLanguageModel = None)[source]¶
Bases: BaseToolkit
Toolkit for routing between Vector Stores.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm: langchain.schema.language_model.BaseLanguageModel [Optional]¶
param vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
Examples using VectorStoreRouterToolkit¶
Vectorstore Agent | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit.html |
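A minimal sketch (the two vector stores are assumed to be pre-built VectorStore instances, e.g. Chroma or FAISS indexes; their construction is omitted):
.. code-block:: python
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreRouterToolkit,
    create_vectorstore_router_agent,
)
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
router_toolkit = VectorStoreRouterToolkit(
    vectorstores=[
        VectorStoreInfo(
            name="state_of_union",
            description="the most recent state of the Union address",
            vectorstore=state_of_union_store,  # assumed pre-built store
        ),
        VectorStoreInfo(
            name="ruff",
            description="documentation for the Ruff python linter",
            vectorstore=ruff_store,  # assumed pre-built store
        ),
    ],
    llm=llm,
)
agent_executor = create_vectorstore_router_agent(llm=llm, toolkit=router_toolkit)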
langchain.agents.agent.BaseMultiActionAgent¶
class langchain.agents.agent.BaseMultiActionAgent[source]¶
Bases: BaseModel
Base Multi Action Agent class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with the observations.
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Actions specifying what tool to use.
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of agent.
get_allowed_tools() → Optional[List[str]][source]¶
abstract plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with the observations.
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Actions specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None[source]¶
Save the agent.
Parameters | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseMultiActionAgent.html |
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict[source]¶
property return_values: List[str]¶
Return values of the agent.
Examples using BaseMultiActionAgent¶
Custom multi-action agent | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.BaseMultiActionAgent.html |
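A minimal subclass sketch in the spirit of the "Custom multi-action agent" guide linked above (the tool names "Search" and "RandomWord" are placeholders):
.. code-block:: python
from typing import Any, List, Tuple, Union

from langchain.agents import BaseMultiActionAgent
from langchain.schema import AgentAction, AgentFinish

class FakeAgent(BaseMultiActionAgent):
    """Toy agent that fires two tools on the first step, then finishes."""

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    def plan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[List[AgentAction], AgentFinish]:
        if not intermediate_steps:
            return [
                AgentAction(tool="Search", tool_input=kwargs["input"], log=""),
                AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""),
            ]
        return AgentFinish(return_values={"output": "done"}, log="")

    async def aplan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[List[AgentAction], AgentFinish]:
        return self.plan(intermediate_steps, **kwargs)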
langchain.agents.agent.Agent¶
class langchain.agents.agent.Agent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: BaseSingleActionAgent
Agent that calls the language model and decides the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called “agent_scratchpad” where the agent can put its
intermediary work.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Required]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
abstract classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶
Create a prompt for this class.
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent[source]¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.Agent.html |
Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]][source]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict[source]¶
validator validate_prompt » all fields[source]¶
Validate that prompt matches format.
abstract property llm_prefix: str¶
Prefix to append the LLM call with.
abstract property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.Agent.html |
langchain.agents.agent_toolkits.openapi.spec.dereference_refs¶
langchain.agents.agent_toolkits.openapi.spec.dereference_refs(spec_obj: dict, full_spec: dict) → Union[dict, list][source]¶
Try to substitute $refs.
The goal is to get the complete docs for each endpoint in context for now.
In the few OpenAPI specs I studied, $refs referenced models
(or in OpenAPI terms, components) and could be nested. This code most
likely misses lots of cases. | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.dereference_refs.html |
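A tiny sketch against a hypothetical spec fragment:
.. code-block:: python
from langchain.agents.agent_toolkits.openapi.spec import dereference_refs

full_spec = {
    "components": {
        "schemas": {
            "Pet": {"type": "object", "properties": {"name": {"type": "string"}}}
        }
    }
}
endpoint_doc = {"schema": {"$ref": "#/components/schemas/Pet"}}
resolved = dereference_refs(endpoint_doc, full_spec)
# resolved["schema"] should now contain the inlined Pet schema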
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit¶
class langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit(*, nla_tools: Sequence[NLATool])[source]¶
Bases: BaseToolkit
Natural Language API Toolkit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]¶
List of API Endpoint Tools.
classmethod from_llm_and_ai_plugin(llm: BaseLanguageModel, ai_plugin: AIPlugin, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) → NLAToolkit[source]¶
Instantiate the toolkit from an AI plugin.
classmethod from_llm_and_ai_plugin_url(llm: BaseLanguageModel, ai_plugin_url: str, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) → NLAToolkit[source]¶
Instantiate the toolkit from an AI plugin URL.
classmethod from_llm_and_spec(llm: BaseLanguageModel, spec: OpenAPISpec, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) → NLAToolkit[source]¶
Instantiate the toolkit by creating tools for each operation.
classmethod from_llm_and_url(llm: BaseLanguageModel, open_api_url: str, requests: Optional[Requests] = None, verbose: bool = False, **kwargs: Any) → NLAToolkit[source]¶
Instantiate the toolkit from an OpenAPI Spec URL
get_tools() → List[BaseTool][source]¶
Get the tools for all the API operations.
Examples using NLAToolkit¶
Natural Language APIs | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit.html |
Plug-and-Plai
Custom Agent with PlugIn Retrieval | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit.html |
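A short sketch following the "Natural Language APIs" guide (the spec URL is the one used in that guide and may change):
.. code-block:: python
from langchain.agents.agent_toolkits import NLAToolkit
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
toolkit = NLAToolkit.from_llm_and_url(llm, "https://api.speak.com/openapi.yaml")
tools = toolkit.get_tools()  # one NLATool per API operation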
langchain.agents.initialize.initialize_agent¶
langchain.agents.initialize.initialize_agent(tools: Sequence[BaseTool], llm: BaseLanguageModel, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, *, tags: Optional[Sequence[str]] = None, **kwargs: Any) → AgentExecutor[source]¶
Load an agent executor given tools and LLM.
Parameters
tools – List of tools this agent has access to.
llm – Language model to use as the agent.
agent – Agent type to use. If None and agent_path is also None, will default to
AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager – CallbackManager to use. Global callback manager is used if
not provided. Defaults to None.
agent_path – Path to serialized agent to use.
agent_kwargs – Additional key word arguments to pass to the underlying agent
tags – Tags to apply to the traced runs.
**kwargs – Additional key word arguments passed to the agent executor
Returns
An agent executor
Examples using initialize_agent¶
ChatGPT Plugins
Google Serper API
Human as a tool
OpenWeatherMap API
Search Tools
Zapier Natural Language Actions API
ArXiv API Tool
Metaphor Search
GraphQL tool
Gradio Tools
SceneXplain
Shell Tool
Zep Memory
Dynamodb Chat Message History
Argilla
Streamlit
WandB Tracing
Comet
Aim
Weights & Biases
MLflow
Google Serper
Flyte
ClearML
Log, Trace, and Monitor Langchain LLM Calls
Portkey
Jira
Document Comparison
Azure Cognitive Services Toolkit
Natural Language APIs
Gmail Toolkit
Github Toolkit
PlayWright Browser Toolkit
Office365 Toolkit
Amadeus Toolkit | https://api.python.langchain.com/en/latest/agents/langchain.agents.initialize.initialize_agent.html |
Amazon API Gateway
Debugging
LangSmith Walkthrough
Comparing Chain Outputs
Agent VectorDB Question Answering Benchmarking
Agent Trajectory
Multi-modal outputs: Image & Text
Agent Debates with Tools
Multiple callback handlers
Multi-Input Tools
Defining Custom Tools
Tool Input Schema
Human-in-the-loop Tool Validation
Self ask with search
ReAct document store
OpenAI Multi Functions Agent
Combine agents and vector stores
Access intermediate steps
Handle parsing errors
Running Agent as an Iterator
Timeouts for agents
Streaming final agent output
Add Memory to OpenAI Functions Agent
Cap the max number of iterations
Custom functions with OpenAI Functions Agent
Async API
Use ToolKits with OpenAI Functions
Human input Chat Model
Fake LLM
Tracking token usage
Human input LLM | https://api.python.langchain.com/en/latest/agents/langchain.agents.initialize.initialize_agent.html |
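Beyond the linked guides, a minimal sketch of the common call pattern (assumes an OpenAI API key is configured):
.. code-block:: python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 13 raised to the 0.5 power?")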
langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing¶
class langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing(*, name: str = 'requests_patch', description: str = 'Use this when you want to PATCH content on a website.\nInput to the tool should be a json string with 3 keys: "url", "data", and "output_instructions".\nThe value of "url" should be a string.\nThe value of "data" should be a dictionary of key-value pairs of the body params available in the OpenAPI spec you want to PATCH the content with at the url.\nThe value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PATCH request creates.\nAlways use double quotes for strings in the json string.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]¶
Bases: BaseRequestsTool, BaseTool
Requests PATCH tool with LLM-instructed extraction of truncated responses.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Optional[Type[BaseModel]] = None¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html |
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Use this when you want to PATCH content on a website.\nInput to the tool should be a json string with 3 keys: "url", "data", and "output_instructions".\nThe value of "url" should be a string.\nThe value of "data" should be a dictionary of key-value pairs of the body params available in the OpenAPI spec you want to PATCH the content with at the url.\nThe value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PATCH request creates.\nAlways use double quotes for strings in the json string.'¶
Tool description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param llm_chain: langchain.chains.llm.LLMChain [Optional]¶
LLMChain used to extract the response.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param name: str = 'requests_patch'¶
Tool name.
param requests_wrapper: TextRequestsWrapper [Required]¶
param response_length: Optional[int] = 5000¶
Maximum length of the response to be returned. | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html |
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used. | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html |
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing.html |
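This tool is normally constructed for you by the OpenAPI planner; a hedged manual sketch (the bearer token is a placeholder, and omitting llm_chain is assumed to fall back to a default parsing chain):
.. code-block:: python
from langchain.agents.agent_toolkits.openapi.planner import (
    RequestsPatchToolWithParsing,
)
from langchain.requests import TextRequestsWrapper

wrapper = TextRequestsWrapper(headers={"Authorization": "Bearer <token>"})
tool = RequestsPatchToolWithParsing(requests_wrapper=wrapper)
# Input is a JSON string with "url", "data", and "output_instructions", e.g.:
# tool.run('{"url": "https://example.com/api/items/1", "data": {"name": "new"}, "output_instructions": "Return the updated id."}')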
langchain.agents.conversational_chat.output_parser.ConvoOutputParser¶
class langchain.agents.conversational_chat.output_parser.ConvoOutputParser[source]¶
Bases: AgentOutputParser
Output parser for the conversational agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Returns formatting instructions for the given output parser.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Attempts to parse the given text into an AgentAction or AgentFinish.
Raises
OutputParserException – If parsing fails.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html |
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html |
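A parsing sketch (the action name and input are illustrative; an "action" of "Final Answer" would yield an AgentFinish instead):
.. code-block:: python
from langchain.agents.conversational_chat.output_parser import ConvoOutputParser

parser = ConvoOutputParser()
text = '''```json
{
    "action": "Search",
    "action_input": "weather in Pomfret"
}
```'''
result = parser.parse(text)
# -> AgentAction(tool="Search", tool_input="weather in Pomfret", ...)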
langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent¶
class langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Agent for the self-ask-with-search paper.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶
Prompt does not depend on tools.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶
Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent.html |
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the LLM call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent.html |
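This agent is usually reached through initialize_agent; a sketch following the "Self ask with search" guide (assumes a SerpAPI key is configured):
.. code-block:: python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",  # the self-ask agent expects exactly this name
        func=search.run,
        description="useful for when you need to ask with search",
    )
]
agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)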
langchain.agents.xml.base.XMLAgentOutputParser¶
class langchain.agents.xml.base.XMLAgentOutputParser[source]¶
Bases: AgentOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the | https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.XMLAgentOutputParser.html |
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.XMLAgentOutputParser.html |
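A parsing sketch, assuming the <tool>/<tool_input> and <final_answer> tags emitted by XMLAgent:
.. code-block:: python
from langchain.agents.xml.base import XMLAgentOutputParser

parser = XMLAgentOutputParser()
action = parser.parse("<tool>search</tool><tool_input>weather in SF</tool_input>")
# -> AgentAction(tool="search", tool_input="weather in SF", ...)
finish = parser.parse("<final_answer>It is sunny in SF.</final_answer>")
# -> AgentFinish(return_values={"output": "It is sunny in SF."}, ...)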
langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo¶
class langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo(*, vectorstore: VectorStore, name: str, description: str)[source]¶
Bases: BaseModel
Information about a VectorStore.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param description: str [Required]¶
param name: str [Required]¶
param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
Examples using VectorStoreInfo¶
Vectorstore Agent | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo.html |
langchain.agents.agent_toolkits.sql.base.create_sql_agent¶
langchain.agents.agent_toolkits.sql.base.create_sql_agent(llm: BaseLanguageModel, toolkit: SQLDatabaseToolkit, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: Optional[str] = None, format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Construct an SQL agent from an LLM and tools.
Examples using create_sql_agent¶
CnosDB
SQL Database Agent | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html |
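A minimal sketch mirroring the linked guides (the SQLite file is a placeholder):
.. code-block:: python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # placeholder database
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many employees are there?")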
langchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent¶
langchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm: BaseLanguageModel, df: Any, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, number_of_head_rows: int = 5, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Construct a pandas agent from an LLM and dataframe.
Examples using create_pandas_dataframe_agent¶
Pandas Dataframe Agent
!pip install bs4 | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html |
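A minimal sketch (the CSV path is a placeholder; assumes pandas and an OpenAI key are available):
.. code-block:: python
import pandas as pd

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")  # placeholder CSV
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are there?")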
langchain.agents.conversational.output_parser.ConvoOutputParser¶
class langchain.agents.conversational.output_parser.ConvoOutputParser(*, ai_prefix: str = 'AI')[source]¶
Bases: AgentOutputParser
Output parser for the conversational agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
Prefix to use before AI output.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output | https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.output_parser.ConvoOutputParser.html |
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.conversational.output_parser.ConvoOutputParser.html |
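A parsing sketch showing both branches (tool call vs. final answer), with illustrative text:
.. code-block:: python
from langchain.agents.conversational.output_parser import ConvoOutputParser

parser = ConvoOutputParser(ai_prefix="AI")
action = parser.parse(
    "Thought: Do I need to use a tool? Yes\n"
    "Action: Search\n"
    "Action Input: weather in Pomfret"
)
# -> AgentAction(tool="Search", tool_input="weather in Pomfret", ...)
finish = parser.parse("AI: The weather in Pomfret is sunny.")
# -> AgentFinish(return_values={"output": "The weather in Pomfret is sunny."}, ...)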
langchain.agents.mrkl.base.ZeroShotAgent¶
class langchain.agents.mrkl.base.ZeroShotAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Agent for the MRKL chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use. | https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html |
classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → PromptTemplate[source]¶
Create prompt in the style of the zero shot agent.
Parameters
tools – List of tools the agent will have access to, used to format the
prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
input_variables – List of input variables the final prompt will expect.
Returns
A PromptTemplate with the template assembled from the pieces here.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html |
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → Agent[source]¶
Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use. | https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html |
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the llm call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent.
Examples using ZeroShotAgent¶
Jina
BabyAGI with Tools
Adding Message Memory backed by a database to an Agent
How to add Memory to an Agent
Custom MRKL agent
Shared memory across agents and tools | https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.ZeroShotAgent.html |
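A construction sketch in the style of the memory guides linked above (the Echo tool is a placeholder):
.. code-block:: python
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI

tools = [Tool(name="Echo", func=lambda q: q, description="Repeats the input back.")]
prompt = ZeroShotAgent.create_prompt(
    tools, input_variables=["input", "agent_scratchpad"]
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)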
langchain.agents.xml.base.XMLAgent¶
class langchain.agents.xml.base.XMLAgent(*, tools: List[BaseTool], llm_chain: LLMChain)[source]¶
Bases: BaseSingleActionAgent
Agent that uses XML tags.
Parameters
tools – list of tools the agent can choose from
llm_chain – The LLMChain to call to predict the next action
Examples
A hedged sketch completing the truncated example (the chat model and tool list are placeholders):
.. code-block:: python
from langchain.agents import XMLAgent
from langchain.chains import LLMChain
from langchain.chat_models import ChatAnthropic

model = ChatAnthropic(model="claude-2")
tools = [...]  # any list of BaseTool instances
chain = LLMChain(llm=model, prompt=XMLAgent.get_default_prompt(), output_parser=XMLAgent.get_default_output_parser())
agent = XMLAgent(tools=tools, llm_chain=chain)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
Chain to use to predict action.
param tools: List[langchain.tools.base.BaseTool] [Required]¶
List of tools this agent has access to.
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → BaseSingleActionAgent¶
get_allowed_tools() → Optional[List[str]]¶
static get_default_output_parser() → XMLAgentOutputParser[source]¶
static get_default_prompt() → ChatPromptTemplate[source]¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.XMLAgent.html |
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
property return_values: List[str]¶
Return values of the agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.xml.base.XMLAgent.html |
langchain.agents.agent_toolkits.openapi.planner.create_openapi_agent¶
langchain.agents.agent_toolkits.openapi.planner.create_openapi_agent(api_spec: ReducedOpenAPISpec, requests_wrapper: TextRequestsWrapper, llm: BaseLanguageModel, shared_memory: Optional[ReadOnlySharedMemory] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = True, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Instantiate an OpenAPI planner and controller for a given spec.
Inject credentials via requests_wrapper.
We use a top-level “orchestrator” agent to invoke the planner and controller,
rather than a top-level planner
that invokes a controller with its plan. This is to keep the planner simple.
Examples using create_openapi_agent¶
OpenAPI agents | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.create_openapi_agent.html |
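A sketch following the "OpenAPI agents" guide (the spec file and bearer token are placeholders):
.. code-block:: python
import yaml

from langchain.agents.agent_toolkits.openapi import planner
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.llms import OpenAI
from langchain.requests import RequestsWrapper

with open("openapi.yaml") as f:  # placeholder spec file
    raw_spec = yaml.safe_load(f)
api_spec = reduce_openapi_spec(raw_spec)
requests_wrapper = RequestsWrapper(headers={"Authorization": "Bearer <token>"})
agent = planner.create_openapi_agent(api_spec, requests_wrapper, OpenAI(temperature=0))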
langchain.agents.loading.load_agent_from_config¶
langchain.agents.loading.load_agent_from_config(config: dict, llm: Optional[BaseLanguageModel] = None, tools: Optional[List[Tool]] = None, **kwargs: Any) → Union[BaseSingleActionAgent, BaseMultiActionAgent][source]¶
Load agent from Config Dict.
Parameters
config – Config dict to load agent from.
llm – Language model to use as the agent.
tools – List of tools this agent has access to.
**kwargs – Additional key word arguments passed to the agent executor.
Returns
An agent executor. | https://api.python.langchain.com/en/latest/agents/langchain.agents.loading.load_agent_from_config.html |
langchain.agents.agent.AgentOutputParser¶
class langchain.agents.agent.AgentOutputParser[source]¶
Bases: BaseOutputParser
Base class for parsing agent output into agent action/finish.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
abstract parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentOutputParser.html |
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
Examples using AgentOutputParser¶
Plug-and-Plai
Wikibase Agent
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Custom Agent with PlugIn Retrieval
Custom agent with tool retrieval | https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentOutputParser.html |
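A custom parser sketch in the style of the linked guides (the "Final Answer:" and "Action:/Action Input:" markers assume a ReAct-style prompt):
.. code-block:: python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        match = re.search(
            r"Action\s*:(.*?)\nAction\s*Input\s*:[\s]*(.*)", llm_output, re.DOTALL
        )
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip(" ").strip('"'),
            log=llm_output,
        )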
langchain.agents.react.base.ReActDocstoreAgent¶
class langchain.agents.react.base.ReActDocstoreAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Agent for the ReAct chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶
Return default prompt.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶
Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶ | https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActDocstoreAgent.html |
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the LLM call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.react.base.ReActDocstoreAgent.html |
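This agent is typically reached through initialize_agent with AgentType.REACT_DOCSTORE; a sketch following the "ReAct document store" guide (requires the wikipedia package):
.. code-block:: python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.agents.react.base import DocstoreExplorer
from langchain.docstore import Wikipedia
from langchain.llms import OpenAI

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=docstore.search, description="Search for a term in the docstore."),
    Tool(name="Lookup", func=docstore.lookup, description="Lookup a term in the docstore."),
]
react = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.REACT_DOCSTORE, verbose=True)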
langchain.agents.structured_chat.base.StructuredChatAgent¶
class langchain.agents.structured_chat.base.StructuredChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Structured Chat Agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
Output parser for the agent.
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations.
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use. | https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html |
classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None) → BasePromptTemplate[source]¶
Create a prompt for this class.
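A small usage sketch, assuming a tools list is already defined; with input_variables left as None, the prompt is expected to take "input" and "agent_scratchpad".
.. code-block:: python

    # Build and inspect the default structured-chat prompt for a tool set.
    prompt = StructuredChatAgent.create_prompt(tools)
    print(prompt.input_variables)  # expected: ['input', 'agent_scratchpad']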
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent. | https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html |
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None, **kwargs: Any) → Agent[source]¶
Construct an agent from an LLM and tools. | https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.base.StructuredChatAgent.html |
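An end-to-end sketch follows, assuming the ChatOpenAI model and a StructuredTool built from a plain function; the multiply tool is purely illustrative.
.. code-block:: python

    from langchain.agents import AgentExecutor
    from langchain.agents.structured_chat.base import StructuredChatAgent
    from langchain.chat_models import ChatOpenAI
    from langchain.tools import StructuredTool

    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    # StructuredTool infers a multi-argument schema from the type hints,
    # which is what the structured chat agent is designed to exploit.
    tools = [StructuredTool.from_function(multiply)]
    agent = StructuredChatAgent.from_llm_and_tools(
        llm=ChatOpenAI(temperature=0), tools=tools
    )
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
    print(executor.run("What is 6 times 7?"))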