langchain.agents.mrkl.base.MRKLChain¶
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param early_stopping_method: str = 'force'¶
The method to use for early stopping if the agent never
returns AgentFinish. Either ‘force’ or ‘generate’.
“force” returns a string saying that it stopped because it met a time or iteration limit.
“generate” calls the agent’s LLM Chain one final time to generate a final answer based on the previous steps.
param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶
How to handle errors raised by the agent’s output parser. Defaults to False, which raises the error.
If true, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent as an observation.
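For illustration, a minimal sketch (not part of the original reference) of the three accepted forms; agent and tools are assumed to already be defined:
from langchain.agents import AgentExecutor

# bool: True sends the raw parser error back to the LLM as an observation
executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, handle_parsing_errors=True
)

# str: this fixed message is sent back instead of the raw error
executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools,
    handle_parsing_errors="Check your output and make sure it conforms!",
)

# callable: derive the observation from the OutputParserException
executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools,
    handle_parsing_errors=lambda e: str(e)[:200],
)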
param max_execution_time: Optional[float] = None¶
The maximum amount of wall clock time to spend in the execution
loop.
param max_iterations: Optional[int] = 15¶
The maximum number of steps to take before ending the execution
loop.
Setting this to None could lead to an infinite loop.
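A hedged sketch of how these limits combine (again assuming agent and tools exist; the values are illustrative):
executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=5,          # stop after at most 5 agent steps
    max_execution_time=30.0,   # or after 30 seconds of wall clock time
    early_stopping_method="force",  # return a stop message instead of one last LLM call
)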
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param return_intermediate_steps: bool = False¶
Whether to return the agent’s trajectory of intermediate steps
at the end in addition to the final output.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param tools: Sequence[BaseTool] [Required]¶
The valid tools the agent can call.
param trim_intermediate_steps: Union[int, Callable[[List[Tuple[AgentAction, str]]], List[Tuple[AgentAction, str]]]] = -1¶
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
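A brief usage sketch, assuming chain is an initialized chain whose single input key is "input" (the key names here are illustrative, not from the reference):
result = chain(
    {"input": "What is 2 ** 8?"},
    return_only_outputs=True,
)
# -> e.g. {"output": "256"}; the actual keys come from Chain.output_keys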
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
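The asynchronous variant mirrors __call__; a sketch assuming the same hypothetical chain as above:
import asyncio

async def main() -> None:
    result = await chain.acall({"input": "What is 2 ** 8?"})
    print(result)

asyncio.run(main())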
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor¶
Create from agent and tools.
classmethod from_chains(llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any) → AgentExecutor[source]¶
User-friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Parameters
llm – The LLM to use as the agent LLM.
chains – The chains the MRKL system has access to.
**kwargs – parameters to be passed to initialization.
Returns
An initialized MRKL chain.
Example
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.agents.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
iter(inputs: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, include_run_info: bool = False, async_: bool = False) → AgentExecutorIterator¶
Enables iteration over steps taken to reach final output.
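A sketch of step-wise execution, assuming mrkl is an initialized MRKLChain; the step-dict keys follow the agent iterator how-to and may differ by version:
for step in mrkl.iter({"input": "What is 2 ** 8?"}):
    intermediate = step.get("intermediate_step")
    if intermediate:
        action, observation = intermediate[0]
        print(action.tool, observation)  # inspect, or break to abort early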
lookup_tool(name: str) → BaseTool¶
Lookup tool by name.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Raise an error; saving is not supported for Agent Executors.
save_agent(file_path: Union[Path, str]) → None¶
Save the underlying agent.
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_return_direct_tool » all fields¶
Validate that tools are compatible with agent.
validator validate_tools » all fields¶
Validate that tools are compatible with agent.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.mrkl.base.MRKLChain.html
langchain.agents.agent.LLMSingleActionAgent¶
class langchain.agents.agent.LLMSingleActionAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, stop: List[str])[source]¶
Bases: BaseSingleActionAgent
Base class for single action agents.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
LLMChain to use for agent.
param output_parser: langchain.agents.agent.AgentOutputParser [Required]¶
Output parser to use for agent.
param stop: List[str] [Required]¶
List of strings to stop on.
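A construction sketch, assuming prompt is a prompt template that includes an agent_scratchpad variable and output_parser is a custom AgentOutputParser you have defined:
from langchain.agents import LLMSingleActionAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],  # generation halts when the LLM starts an observation
)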
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → BaseSingleActionAgent¶
get_allowed_tools() → Optional[List[str]]¶
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with the observations.
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict[source]¶
property input_keys: List[str]¶
Return the input keys.
Returns
List of input keys.
property return_values: List[str]¶
Return values of the agent.
Examples using LLMSingleActionAgent¶
Plug-and-Plai
Wikibase Agent
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Custom Agent with PlugIn Retrieval
Custom agent with tool retrieval
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.LLMSingleActionAgent.html
langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit¶
class langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit(*, api_resource: Resource = None)[source]¶
Bases: BaseToolkit
Toolkit for interacting with Gmail.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_resource: Resource [Optional]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
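A usage sketch; it assumes Gmail API credentials are already configured so the default api_resource can be built:
from langchain.agents.agent_toolkits import GmailToolkit

toolkit = GmailToolkit()
tools = toolkit.get_tools()  # search, send, draft, etc.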
model Config[source]¶
Bases: object
Pydantic config.
arbitrary_types_allowed = True¶
Examples using GmailToolkit¶
Gmail Toolkit
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit.html
langchain.agents.agent_iterator.BaseAgentExecutorIterator¶
class langchain.agents.agent_iterator.BaseAgentExecutorIterator[source]¶
Bases: ABC
Base class for AgentExecutorIterator.
Methods
__init__()
build_callback_manager()
abstract build_callback_manager() → None[source]¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_iterator.BaseAgentExecutorIterator.html
langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing¶
class langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing(*, name: str = 'requests_get', description: str = 'Use this to GET content from a website.\nInput to the tool should be a json string with 3 keys: "url", "params" and "output_instructions".\nThe value of "url" should be a string. \nThe value of "params" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. \nIf parameters are not needed, or not available, leave it empty.\nThe value of "output_instructions" should be instructions on what information to extract from the response, \nfor example the id(s) for a resource(s) that the GET request fetches.\n', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]¶
Bases: BaseRequestsTool, BaseTool
Requests GET tool with LLM-instructed extraction of truncated responses.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Optional[Type[BaseModel]] = None¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Use this to GET content from a website.\nInput to the tool should be a json string with 3 keys: "url", "params" and "output_instructions".\nThe value of "url" should be a string. \nThe value of "params" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. \nIf parameters are not needed, or not available, leave it empty.\nThe value of "output_instructions" should be instructions on what information to extract from the response, \nfor example the id(s) for a resource(s) that the GET request fetches.\n'¶
Tool description.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param llm_chain: langchain.chains.llm.LLMChain [Optional]¶
LLMChain used to extract the response.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a tool with its use case.
param name: str = 'requests_get'¶
Tool name.
param requests_wrapper: TextRequestsWrapper [Required]¶
param response_length: Optional[int] = 5000¶
Maximum length of the response to be returned.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
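A hypothetical invocation, assuming tool is an initialized RequestsGetToolWithParsing; the input follows the JSON contract stated in the tool description above, and the URL is illustrative:
import json

observation = tool.run(json.dumps({
    "url": "https://example.com/api/items",
    "params": {},
    "output_instructions": "Extract the ids of the returned items.",
}))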
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing.html
langchain.agents.chat.base.ChatAgent¶
class langchain.agents.chat.base.ChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Chat Agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
Output parser for the agent.
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool], system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → BasePromptTemplate[source]¶
Create a prompt for this class.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → Agent[source]¶
Construct an agent from an LLM and tools.
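A minimal construction sketch, assuming tools is a sequence of BaseTool instances:
from langchain.agents.chat.base import ChatAgent
from langchain.chat_models import ChatOpenAI

agent = ChatAgent.from_llm_and_tools(
    llm=ChatOpenAI(temperature=0),
    tools=tools,
)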
get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the llm call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent.
https://api.python.langchain.com/en/latest/agents/langchain.agents.chat.base.ChatAgent.html
langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent¶
class langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]¶
Bases: BaseMultiActionAgent
An Agent driven by OpenAI’s function-powered API.
Parameters
llm – This should be an instance of ChatOpenAI, specifically a model
that supports using functions.
tools – The tools this agent has access to.
prompt – The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
OpenAIMultiFunctionsAgent.create_prompt(…)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶
param tools: Sequence[langchain.tools.base.BaseTool] [Required]¶
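A hedged construction sketch; the model name is illustrative (it must support OpenAI function calling) and tools is assumed to exist:
from langchain.agents.openai_functions_multi_agent.base import (
    OpenAIMultiFunctionsAgent,
)
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
agent = OpenAIMultiFunctionsAgent(
    llm=llm,
    tools=tools,
    prompt=OpenAIMultiFunctionsAgent.create_prompt(),
)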
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) → BasePromptTemplate[source]¶
Create prompt for this agent.
Parameters
system_message – Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages – Prompt messages that will be placed between the
system message and the new human input.
Returns
A prompt template to pass into this agent.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) → BaseMultiActionAgent[source]¶
Construct an agent from an LLM and tools.
get_allowed_tools() → List[str][source]¶
Get allowed tools.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_llm » all fields[source]¶
validator validate_prompt » all fields[source]¶
property functions: List[dict]¶
property input_keys: List[str]¶
Get input keys. Input refers to user input here.
property return_values: List[str]¶
Return values of the agent.
https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent.html
langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec¶
langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec: dict, dereference: bool = True) → ReducedOpenAPISpec[source]¶
Simplify/distill/minify an OpenAPI spec, producing a smaller target for retrieval and, more importantly, smaller results from retrieval.
I was hoping https://openapi.tools/ would have some useful bits to this end, but it doesn’t seem to.
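A usage sketch, assuming the spec has already been loaded into a dict (the file name is hypothetical):
import yaml
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec

with open("openapi.yaml") as f:
    raw_spec = yaml.safe_load(f)
reduced = reduce_openapi_spec(raw_spec, dereference=True)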
Examples using reduce_openapi_spec¶
OpenAPI agents
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec.html
langchain.agents.agent_iterator.AgentExecutorIterator¶
class langchain.agents.agent_iterator.AgentExecutorIterator(agent_executor: AgentExecutor, inputs: Any, callbacks: Callbacks = None, *, tags: Optional[list[str]] = None, include_run_info: bool = False, async_: bool = False)[source]¶
Bases: BaseAgentExecutorIterator
Iterator for AgentExecutor.
Initialize the AgentExecutorIterator with the given AgentExecutor,
inputs, and optional callbacks.
Methods
__init__(agent_executor, inputs[, ...])
Initialize the AgentExecutorIterator with the given AgentExecutor, inputs, and optional callbacks.
build_callback_manager()
Create and configure the callback manager based on the current callbacks and tags.
raise_stopasynciteration(output)
Raise a StopAsyncIteration exception with the given output.
raise_stopiteration(output)
Raise a StopIteration exception with the given output.
reset()
Reset the iterator to its initial state, clearing intermediate steps, iterations, and time elapsed.
update_iterations()
Increment the number of iterations and update the time elapsed.
Attributes
agent_executor
callback_manager
callbacks
color_mapping
final_outputs
inputs
name_to_tool_map
tags
run_manager
timeout_manager
build_callback_manager() → None[source]¶
Create and configure the callback manager based on the current
callbacks and tags.
async raise_stopasynciteration(output: Any) → NoReturn[source]¶
Raise a StopAsyncIteration exception with the given output.
Close the timeout context manager.
raise_stopiteration(output: Any) → NoReturn[source]¶
Raise a StopIteration exception with the given output.
reset() → None[source]¶
Reset the iterator to its initial state, clearing intermediate steps,
iterations, and time elapsed.
update_iterations() → None[source]¶
Increment the number of iterations and update the time elapsed.
property agent_executor: AgentExecutor¶
property callback_manager: Union[langchain.callbacks.manager.AsyncCallbackManager, langchain.callbacks.manager.CallbackManager]¶
property callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]¶
property color_mapping: dict[str, str]¶
property final_outputs: Optional[dict[str, Any]]¶
property inputs: dict[str, str]¶
property name_to_tool_map: dict[str, langchain.tools.base.BaseTool]¶
run_manager: Optional[Union[langchain.callbacks.manager.AsyncCallbackManagerForChainRun, langchain.callbacks.manager.CallbackManagerForChainRun]]¶
property tags: Optional[List[str]]¶
timeout_manager: Any¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_iterator.AgentExecutorIterator.html
langchain.agents.agent_toolkits.json.toolkit.JsonToolkit¶
class langchain.agents.agent_toolkits.json.toolkit.JsonToolkit(*, spec: JsonSpec)[source]¶
Bases: BaseToolkit
Toolkit for interacting with a JSON spec.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param spec: langchain.tools.json.tool.JsonSpec [Required]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
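A construction sketch, assuming a spec file (the name is hypothetical) that loads into a dict:
import yaml
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.tools.json.tool import JsonSpec

with open("openapi.yml") as f:
    data = yaml.safe_load(f)
toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
tools = toolkit.get_tools()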
Examples using JsonToolkit¶
JSON Agent
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.toolkit.JsonToolkit.html
langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit¶
class langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit(*, tools: List[BaseTool] = [])[source]¶
Bases: BaseToolkit
Zapier Toolkit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param tools: List[langchain.tools.base.BaseTool] = []¶
async classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) → ZapierToolkit[source]¶
Create a toolkit from a ZapierNLAWrapper.
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) → ZapierToolkit[source]¶
Create a toolkit from a ZapierNLAWrapper.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
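A usage sketch; it assumes the ZAPIER_NLA_API_KEY environment variable is set so the wrapper can authenticate:
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

toolkit = ZapierToolkit.from_zapier_nla_wrapper(ZapierNLAWrapper())
tools = toolkit.get_tools()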
Examples using ZapierToolkit¶
Zapier Natural Language Actions API
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit.html
langchain.agents.schema.AgentScratchPadChatPromptTemplate¶
class langchain.agents.schema.AgentScratchPadChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]])[source]¶
Bases: ChatPromptTemplate
Chat prompt template for the agent scratchpad.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
List of input variables in template messages. Used for validation.
param messages: List[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]] [Required]¶
List of messages consisting of either message prompt templates or messages.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format the chat template into a list of finalized messages.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
list of formatted messages
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseChatPromptTemplate, BaseMessage, Tuple[str, str], Tuple[Type, str], str]]) → ChatPromptTemplate¶
Create a chat prompt template from a variety of message formats.
Examples
Instantiation from a list of message templates:
template = ChatPromptTemplate.from_messages([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
template = ChatPromptTemplate.from_messages([
SystemMessage(content="hello"),
("human", "Hello, how are you?"),
])
Parameters
messages – sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., (“human”, “{user_input}”),
(4) 2-tuple of (message class, template), (5) a string, which is
shorthand for (“human”, template); e.g., “{user_input}”
Returns
a chat prompt template
classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate¶
Create a chat prompt template from a list of (role, template) tuples.
Parameters
string_messages – list of (role, template) tuples.
Returns
a chat prompt template
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate¶
Create a chat prompt template from a list of (role class, template) tuples.
Parameters
string_messages – list of (role class, template) tuples.
Returns
a chat prompt template
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate¶
Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Parameters
template – template string
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
invoke(input: Dict, config: langchain.schema.runnable.RunnableConfig | None = None) → PromptValue¶
partial(**kwargs: Union[str, Callable[[], str]]) → ChatPromptTemplate¶
Return a new ChatPromptTemplate with some of the input variables already
filled in.
Parameters
**kwargs – keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns
A new ChatPromptTemplate.
Example
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
[
("system", "You are an AI assistant named {name}."),
("human", "Hi I'm {user}"),
("ai", "Hi there, {user}, I'm {name}."),
("human", "{input}"),
]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
save(file_path: Union[Path, str]) → None¶
Save prompt to file.
Parameters
file_path – path to file.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_input_variables » all fields¶
Validate input variables.
If input_variables is not set, it will be set to the union of
all input variables in the messages.
Parameters
values – values to validate.
Returns
Validated values.
validator validate_variable_names » all fields¶
Validate variable names do not include restricted names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.schema.AgentScratchPadChatPromptTemplate.html
langchain.agents.loading.load_agent¶
langchain.agents.loading.load_agent(path: Union[str, Path], **kwargs: Any) → Union[BaseSingleActionAgent, BaseMultiActionAgent][source]¶
Unified method for loading an agent from LangChainHub or the local filesystem.
Parameters
path – Path to the agent file.
**kwargs – Additional keyword arguments passed to the agent executor.
Returns
An agent executor.
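A usage sketch (the path is hypothetical):
from langchain.agents.loading import load_agent

agent = load_agent("path/to/agent.yaml")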
https://api.python.langchain.com/en/latest/agents/langchain.agents.loading.load_agent.html
langchain.agents.load_tools.get_all_tool_names¶
langchain.agents.load_tools.get_all_tool_names() → List[str][source]¶
Get a list of all possible tool names.
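For example:
from langchain.agents.load_tools import get_all_tool_names

names = get_all_tool_names()
print(len(names), names[:5])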
https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.get_all_tool_names.html
langchain.agents.agent_toolkits.base.BaseToolkit¶
class langchain.agents.agent_toolkits.base.BaseToolkit[source]¶
Bases: BaseModel, ABC
Base Toolkit representing a collection of related tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.base.BaseToolkit.html
langchain.agents.load_tools.load_tools¶
langchain.agents.load_tools.load_tools(tool_names: List[str], llm: Optional[BaseLanguageModel] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → List[BaseTool][source]¶
Load tools based on their name.
Parameters
tool_names – name of tools to load.
llm – An optional language model, may be needed to initialize certain tools.
callbacks – Optional callback manager or list of callback handlers.
If not provided, default global callback manager will be used.
Returns
List of tools.
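A typical call; "serpapi" and "llm-math" are standard tool names, and llm-math needs an LLM to initialize:
from langchain.agents import load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)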
Examples using load_tools¶
ChatGPT Plugins
Human as a tool
AWS Lambda API
Requests
OpenWeatherMap API
Search Tools
ArXiv API Tool
GraphQL tool
SceneXplain
Argilla
Streamlit
SerpAPI
WandB Tracing
Comet
Aim
Golden
Weights & Biases
Wolfram Alpha
MLflow
DataForSEO
SearxNG Search API
Google Serper
OpenWeatherMap
Flyte
ClearML
Google Search
Log, Trace, and Monitor Langchain LLM Calls
Portkey
Amazon API Gateway
Debugging
LangSmith Walkthrough
Agent Debates with Tools
Multiple callback handlers
Defining Custom Tools
Human-in-the-loop Tool Validation
Access intermediate steps
Timeouts for agents
Streaming final agent output
Cap the max number of iterations
Async API
Human input Chat Model
Fake LLM
Tracking token usage
Human input LLM
https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.load_tools.html
langchain.agents.agent_toolkits.xorbits.base.create_xorbits_agent¶
langchain.agents.agent_toolkits.xorbits.base.create_xorbits_agent(llm: BaseLLM, data: Any, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = '', suffix: str = '', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Construct a Xorbits agent from an LLM and a dataframe.
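A construction sketch; the CSV path is hypothetical and the xorbits package must be installed:
import xorbits.pandas as pd
from langchain.agents.agent_toolkits.xorbits.base import create_xorbits_agent
from langchain.llms import OpenAI

data = pd.read_csv("titanic.csv")  # hypothetical dataset
agent = create_xorbits_agent(llm=OpenAI(temperature=0), data=data, verbose=True)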
Examples using create_xorbits_agent¶
Xorbits Agent
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.xorbits.base.create_xorbits_agent.html
langchain.agents.load_tools.load_huggingface_tool¶
langchain.agents.load_tools.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) → BaseTool[source]¶
Loads a tool from the HuggingFace Hub.
Parameters
task_or_repo_id – Task or model repo id.
model_repo_id – Optional model repo id.
token – Optional token.
remote – Optional remote. Defaults to False.
**kwargs – Additional keyword arguments passed to the tool.
Returns
A tool.
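A usage sketch; the repo id below is illustrative:
from langchain.agents import load_huggingface_tool

tool = load_huggingface_tool("lysandre/hf-model-downloads")
print(f"{tool.name}: {tool.description}")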
Examples using load_huggingface_tool¶
Requires transformers>=4.29.0 and huggingface_hub>=0.14.1
https://api.python.langchain.com/en/latest/agents/langchain.agents.load_tools.load_huggingface_tool.html
langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit¶
class langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit(*, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]¶
Bases: BaseToolkit
Toolkit for PlayWright browser tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param async_browser: Optional['AsyncBrowser'] = None¶
param sync_browser: Optional['SyncBrowser'] = None¶
classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → PlayWrightBrowserToolkit[source]¶
Instantiate the toolkit.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
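A usage sketch for the async browser (requires the playwright package and installed browsers):
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_async_playwright_browser

async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()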
validator validate_imports_and_browser_provided » all fields[source]¶
Check that the arguments are valid.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
Examples using PlayWrightBrowserToolkit¶
Metaphor Search
PlayWright Browser Toolkit
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit.html
langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit¶
class langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit(*, root_dir: Optional[str] = None, selected_tools: Optional[List[str]] = None)[source]¶
Bases: BaseToolkit
Toolkit for interacting with local files.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param root_dir: Optional[str] = None¶
If specified, all file operations are made relative to root_dir.
param selected_tools: Optional[List[str]] = None¶
If provided, only the selected tools are exposed. Defaults to all.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
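A sketch restricting the toolkit to a working directory and a subset of tools (the directory is hypothetical):
from langchain.agents.agent_toolkits import FileManagementToolkit

toolkit = FileManagementToolkit(
    root_dir="/tmp/agent_workspace",  # all file operations stay relative to this
    selected_tools=["read_file", "write_file", "list_directory"],
)
tools = toolkit.get_tools()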
validator validate_tools » all fields[source]¶
Examples using FileManagementToolkit¶
File System Tools
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit.html
langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent¶
class langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]¶
Bases: BaseSingleActionAgent
An Agent driven by OpenAI’s function-powered API.
Parameters
llm – This should be an instance of ChatOpenAI, specifically a model
that supports using functions.
tools – The tools this agent has access to.
prompt – The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
OpenAIFunctionsAgent.create_prompt(…)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶
param tools: Sequence[langchain.tools.base.BaseTool] [Required]¶
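A hedged construction sketch mirroring the multi-function variant above; tools is assumed to exist and the model name is illustrative (it must support function calling):
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
agent = OpenAIFunctionsAgent(
    llm=llm, tools=tools, prompt=OpenAIFunctionsAgent.create_prompt()
)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)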
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) → BasePromptTemplate[source]¶
Create prompt for this agent.
Parameters
system_message – Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages – Prompt messages that will be placed between the
system message and the new human input.
Returns
A prompt template to pass into this agent.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) → BaseSingleActionAgent[source]¶
Construct an agent from an LLM and tools.
get_allowed_tools() → List[str][source]¶
Get allowed tools.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, with_functions: bool = True, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_llm » all fields[source]¶
validator validate_prompt » all fields[source]¶
property functions: List[dict]¶
property input_keys: List[str]¶
Get input keys. Input refers to user input here.
property return_values: List[str]¶
Return values of the agent.
https://api.python.langchain.com/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html
langchain.agents.utils.validate_tools_single_input¶
langchain.agents.utils.validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) → None[source]¶
Validate tools for single input.
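A small sketch of the check (the helper returns None on success and raises a ValueError when a tool takes more than one input):
.. code-block:: python

    from langchain.agents.utils import validate_tools_single_input
    from langchain.tools import Tool

    echo = Tool(name="echo", func=lambda s: s, description="Echoes the input.")
    validate_tools_single_input("MyAgent", [echo])  # passes silently
    # A multi-input (structured) tool here would raise a ValueError instead.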
https://api.python.langchain.com/en/latest/agents/langchain.agents.utils.validate_tools_single_input.html
langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser¶
class langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser(*, followups: Sequence[str] = ('Follow up:', 'Followup:'), finish_string: str = 'So the final answer is: ')[source]¶
Bases: AgentOutputParser
Output parser for the self-ask agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param finish_string: str = 'So the final answer is: '¶
param followups: Sequence[str] = ('Follow up:', 'Followup:')¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
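A sketch of the parsing behaviour on two made-up completions, using the default followups and finish_string above:
.. code-block:: python

    from langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser

    parser = SelfAskOutputParser()

    # A trailing "Follow up:" line becomes an AgentAction for the
    # self-ask agent's single "Intermediate Answer" tool.
    action = parser.parse("Yes.\nFollow up: Who won the 2018 World Cup?")

    # The finish string becomes an AgentFinish carrying the answer.
    finish = parser.parse("So the final answer is: France")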
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser.html
langchain.agents.structured_chat.output_parser.StructuredChatOutputParser¶
class langchain.agents.structured_chat.output_parser.StructuredChatOutputParser[source]¶
Bases: AgentOutputParser
Output parser for the structured chat agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
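A sketch on a made-up completion: the structured chat agent emits a fenced JSON blob with "action" and "action_input" keys, and the "Final Answer" action maps to an AgentFinish:
.. code-block:: python

    from langchain.agents.structured_chat.output_parser import StructuredChatOutputParser

    parser = StructuredChatOutputParser()
    text = (
        "Action:\n"
        "```json\n"
        '{"action": "Final Answer", "action_input": "Paris is the capital of France."}\n'
        "```"
    )
    result = parser.parse(text)  # -> AgentFinish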
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.structured_chat.output_parser.StructuredChatOutputParser.html
langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit¶
class langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit(*, account: Account = None)[source]¶
Bases: BaseToolkit
Toolkit for interacting with Office 365.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param account: Account [Optional]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
model Config[source]¶
Bases: object
Pydantic config.
arbitrary_types_allowed = True¶
Examples using O365Toolkit¶
Office365 Toolkit
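A hedged usage sketch: it assumes the O365 package is installed and that CLIENT_ID and CLIENT_SECRET credentials for a registered Azure application are set in the environment, from which the toolkit authenticates its default Account.
.. code-block:: python

    from langchain.agents.agent_toolkits.office365.toolkit import O365Toolkit

    toolkit = O365Toolkit()  # builds an authenticated Account from env credentials
    tools = toolkit.get_tools()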
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.office365.toolkit.O365Toolkit.html
langchain.agents.tools.InvalidTool¶
class langchain.agents.tools.InvalidTool(*, name: str = 'invalid_tool', description: str = 'Called when tool name is invalid.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]¶
Bases: BaseTool
Tool that is run when an invalid tool name is encountered by the agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Optional[Type[BaseModel]] = None¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'Called when tool name is invalid.'¶
Description of the tool.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param name: str = 'invalid_tool'¶
Name of the tool.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.tools.InvalidTool.html
langchain.agents.agent_toolkits.github.toolkit.GitHubToolkit¶
class langchain.agents.agent_toolkits.github.toolkit.GitHubToolkit(*, tools: List[BaseTool] = [])[source]¶
Bases: BaseToolkit
GitHub Toolkit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param tools: List[langchain.tools.base.BaseTool] = []¶
classmethod from_github_api_wrapper(github_api_wrapper: GitHubAPIWrapper) → GitHubToolkit[source]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
Examples using GitHubToolkit¶
Github Toolkit
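A hedged usage sketch, assuming a GitHub App is configured through environment variables (GITHUB_APP_ID, GITHUB_APP_PRIVATE_KEY, GITHUB_REPOSITORY):
.. code-block:: python

    from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
    from langchain.utilities.github import GitHubAPIWrapper

    github = GitHubAPIWrapper()  # reads the GITHUB_* environment variables
    toolkit = GitHubToolkit.from_github_api_wrapper(github)
    tools = toolkit.get_tools()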
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.github.toolkit.GitHubToolkit.html
langchain.agents.react.output_parser.ReActOutputParser¶
class langchain.agents.react.output_parser.ReActOutputParser[source]¶
Bases: AgentOutputParser
Output parser for the ReAct agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
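A sketch on made-up ReAct-style completions; the last line must start with "Action: " and name a tool with its input in square brackets, with the special "Finish" action ending the run:
.. code-block:: python

    from langchain.agents.react.output_parser import ReActOutputParser

    parser = ReActOutputParser()

    action = parser.parse("Thought: I should search\nAction: Search[Python]")
    # -> AgentAction(tool="Search", tool_input="Python", ...)

    finish = parser.parse("Thought: done\nAction: Finish[42]")
    # -> AgentFinish({"output": "42"}, ...)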
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.react.output_parser.ReActOutputParser.html
langchain.agents.agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit¶
class langchain.agents.agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit[source]¶
Bases: BaseToolkit
Toolkit for Azure Cognitive Services.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
Examples using AzureCognitiveServicesToolkit¶
Azure Cognitive Services Toolkit
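A hedged usage sketch, assuming the Azure credentials are supplied as environment variables (AZURE_COGS_KEY, AZURE_COGS_ENDPOINT, AZURE_COGS_REGION):
.. code-block:: python

    from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit

    toolkit = AzureCognitiveServicesToolkit()
    tools = toolkit.get_tools()  # image analysis, form recognition, speech tools, ...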
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit.html
langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing¶
class langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing(*, name: str = 'requests_delete', description: str = 'ONLY USE THIS TOOL WHEN THE USER HAS SPECIFICALLY REQUESTED TO DELETE CONTENT FROM A WEBSITE.\nInput to the tool should be a json string with 2 keys: "url", and "output_instructions".\nThe value of "url" should be a string.\nThe value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the DELETE request creates.\nAlways use double quotes for strings in the json string.\nONLY USE THIS TOOL IF THE USER HAS SPECIFICALLY REQUESTED TO DELETE SOMETHING.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, requests_wrapper: TextRequestsWrapper, response_length: Optional[int] = 5000, llm_chain: LLMChain = None)[source]¶
Bases: BaseRequestsTool, BaseTool
A tool that sends a DELETE request and parses the response.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param args_schema: Optional[Type[BaseModel]] = None¶
Pydantic model class to validate and parse the tool’s input arguments.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated. Please use callbacks instead.
param callbacks: Callbacks = None¶
Callbacks to be called during tool execution.
param description: str = 'ONLY USE THIS TOOL WHEN THE USER HAS SPECIFICALLY REQUESTED TO DELETE CONTENT FROM A WEBSITE.\nInput to the tool should be a json string with 2 keys: "url", and "output_instructions".\nThe value of "url" should be a string.\nThe value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the DELETE request creates.\nAlways use double quotes for strings in the json string.\nONLY USE THIS TOOL IF THE USER HAS SPECIFICALLY REQUESTED TO DELETE SOMETHING.'¶
The description of the tool.
param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶
Handle the content of the ToolException thrown.
param llm_chain: langchain.chains.llm.LLMChain [Optional]¶
The LLM chain used to parse the response.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param name: str = 'requests_delete'¶
The name of the tool.
param requests_wrapper: TextRequestsWrapper [Required]¶
param response_length: Optional[int] = 5000¶
The maximum length of the response.
param return_direct: bool = False¶
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case.
param verbose: bool = False¶
Whether to log the tool’s progress.
__call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶
Make tool callable.
async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool asynchronously.
invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run the tool.
property args: dict¶
property is_single_input: bool¶
Whether the tool only accepts a single input.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing.html
langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit¶
class langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit(*, requests_wrapper: TextRequestsWrapper)[source]¶
Bases: BaseToolkit
Toolkit for making REST requests.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param requests_wrapper: langchain.utilities.requests.TextRequestsWrapper [Required]¶
get_tools() → List[BaseTool][source]¶
Return a list of tools.
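A minimal sketch: the toolkit wraps a TextRequestsWrapper so that each HTTP verb becomes a tool.
.. code-block:: python

    from langchain.agents.agent_toolkits.openapi.toolkit import RequestsToolkit
    from langchain.utilities.requests import TextRequestsWrapper

    toolkit = RequestsToolkit(requests_wrapper=TextRequestsWrapper())
    tools = toolkit.get_tools()  # requests_get, requests_post, requests_patch, ...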
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit.html
langchain.agents.agent_toolkits.json.base.create_json_agent¶
langchain.agents.agent_toolkits.json.base.create_json_agent(llm: BaseLanguageModel, toolkit: JsonToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or
a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Construct a JSON agent from an LLM and tools.
Examples using create_json_agent¶
JSON Agent
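A hedged usage sketch along the lines of the JSON Agent example; the spec filename is hypothetical and an OpenAI API key is assumed to be configured:
.. code-block:: python

    import yaml

    from langchain.agents import create_json_agent
    from langchain.agents.agent_toolkits import JsonToolkit
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    with open("openai_openapi.yml") as f:  # hypothetical spec file
        data = yaml.safe_load(f)

    toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
    executor = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    executor.run("What are the required parameters in the request body to the /completions endpoint?")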
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html
langchain.agents.agent_toolkits.amadeus.toolkit.AmadeusToolkit¶
class langchain.agents.agent_toolkits.amadeus.toolkit.AmadeusToolkit(*, client: Client = None)[source]¶
Bases: BaseToolkit
Toolkit for interacting with Amadeus.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Client [Optional]¶
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
model Config[source]¶
Bases: object
Pydantic config.
arbitrary_types_allowed = True¶
Examples using AmadeusToolkit¶
Amadeus Toolkit
https://api.python.langchain.com/en/latest/agents/langchain.agents.agent_toolkits.amadeus.toolkit.AmadeusToolkit.html
langchain.vectorstores.pgembedding.EmbeddingStore¶
class langchain.vectorstores.pgembedding.EmbeddingStore(**kwargs)[source]¶
Bases: BaseModel
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Attributes
cmetadata
collection
collection_id
custom_id
document
embedding
metadata
registry
uuid
cmetadata¶
collection¶
collection_id¶
custom_id¶
document¶
embedding¶
metadata: MetaData = MetaData()¶
registry: RegistryType = <sqlalchemy.orm.decl_api.registry object>¶
uuid¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgembedding.EmbeddingStore.html
langchain.vectorstores.qdrant.QdrantException¶
class langchain.vectorstores.qdrant.QdrantException[source]¶
Bases: Exception
Base class for all Qdrant-related exceptions.
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.
args¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.qdrant.QdrantException.html
langchain.vectorstores.pgvector.BaseModel¶
class langchain.vectorstores.pgvector.BaseModel(**kwargs: Any)[source]¶
Bases: Base
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Attributes
metadata
registry
uuid
metadata: MetaData = MetaData()¶
registry: RegistryType = <sqlalchemy.orm.decl_api.registry object>¶
uuid = Column(None, UUID(), table=None, primary_key=True, nullable=False, default=CallableColumnDefault(<function uuid4>))¶
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pgvector.BaseModel.html
langchain.vectorstores.docarray.base.DocArrayIndex¶
class langchain.vectorstores.docarray.base.DocArrayIndex(doc_index: BaseDocIndex, embedding: Embeddings)[source]¶
Bases: VectorStore, ABC
Initialize a vector store from DocArray’s DocIndex.
Methods
__init__(doc_index, embedding)
Initialize a vector store from DocArray's DocIndex.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return docs most similar to query.
Attributes
doc_cls
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
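For instance, a sketch using the in-memory subclass documented below, with fake embeddings so it runs offline (assuming the docarray extra is installed):
.. code-block:: python

    from langchain.embeddings import FakeEmbeddings
    from langchain.vectorstores.docarray.in_memory import DocArrayInMemorySearch

    store = DocArrayInMemorySearch.from_texts(
        ["apple pie", "apple tart", "banana bread", "carrot cake"],
        FakeEmbeddings(size=16),
    )
    # Low lambda_mult favours diversity, high lambda_mult favours similarity.
    docs = store.max_marginal_relevance_search(
        "apple dessert", k=2, fetch_k=4, lambda_mult=0.25
    )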
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text and
cosine distance in float for each.
Lower score represents more similarity.
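Continuing the sketch above:
.. code-block:: python

    # Raw distance-like scores: lower means closer.
    for doc, score in store.similarity_search_with_score("apple dessert", k=2):
        print(round(score, 3), doc.page_content)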
property doc_cls: Type[BaseDoc]¶
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.docarray.base.DocArrayIndex.html
langchain.vectorstores.alibabacloud_opensearch.create_metadata¶
langchain.vectorstores.alibabacloud_opensearch.create_metadata(fields: Dict[str, Any]) → Dict[str, Any][source]¶
Create metadata from fields.
Parameters
fields – The fields of the document. The fields must be a dict.
Returns
The metadata of the document. The metadata must be a dict.
Return type
metadata
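A sketch on a made-up field dict; exactly which keys are excluded from the metadata is an implementation detail, so the result shown is indicative only:
.. code-block:: python

    from langchain.vectorstores.alibabacloud_opensearch import create_metadata

    fields = {"id": "doc-1", "document": "hello world", "source": "kb", "author": "alice"}
    metadata = create_metadata(fields)  # e.g. {"source": "kb", "author": "alice"}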
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.alibabacloud_opensearch.create_metadata.html
langchain.vectorstores.myscale.MyScaleSettings¶
class langchain.vectorstores.myscale.MyScaleSettings(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, host: str = 'localhost', port: int = 8443, username: Optional[str] = None, password: Optional[str] = None, index_type: str = 'IVFFLAT', index_param: Optional[Dict[str, str]] = None, column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}, database: str = 'default', table: str = 'langchain', metric: str = 'cosine')[source]¶
Bases: BaseSettings
MyScale Client Configuration
Attributes:
myscale_host (str) : URL to connect to the MyScale backend. Defaults to 'localhost'.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : Index type string.
index_param (dict) : Index build parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'langchain'.
metric (str) : Metric to compute distance; supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.
column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector; must be the same size as the number of columns. For example:
.. code-block:: python

    {
        'id': 'text_id',
        'vector': 'text_embedding',
        'text': 'text_plain',
        'metadata': 'metadata_dictionary_in_json',
    }

Defaults to the identity map.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}¶
param database: str = 'default'¶
param host: str = 'localhost'¶
param index_param: Optional[Dict[str, str]] = None¶
param index_type: str = 'IVFFLAT'¶
param metric: str = 'cosine'¶
param password: Optional[str] = None¶
param port: int = 8443¶
param table: str = 'langchain'¶
param username: Optional[str] = None¶
model Config[source]¶
Bases: object
env_file = '.env'¶
env_file_encoding = 'utf-8'¶
env_prefix = 'myscale_'¶
Examples using MyScaleSettings¶
MyScale
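A construction sketch; all values below are hypothetical, and with the env_file/env_prefix configuration above the same settings can instead come from MYSCALE_-prefixed environment variables or a .env file:
.. code-block:: python

    from langchain.vectorstores.myscale import MyScaleSettings

    config = MyScaleSettings(
        host="msc-demo.example.com",  # hypothetical endpoint
        port=8443,
        username="demo_user",
        password="demo_password",
        metric="cosine",
    )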
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.myscale.MyScaleSettings.html
langchain.vectorstores.starrocks.debug_output¶
langchain.vectorstores.starrocks.debug_output(s: Any) → None[source]¶
Print a debug message if DEBUG is True.
Parameters
s – The message to print
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.debug_output.html
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch¶
class langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(embedding: Embeddings, config: AlibabaCloudOpenSearchSettings, **kwargs: Any)[source]¶
Bases: VectorStore
Alibaba Cloud OpenSearch Vector Store
Methods
__init__(embedding, config, **kwargs)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_results(json_result)
create_results_with_score(json_result)
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding[, ids, ...])
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, config])
Return VectorStore initialized from texts and embeddings.
inner_embedding_query(embedding[, ...])
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, search_filter])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_results(json_result: Dict[str, Any]) → List[Document][source]¶
create_results_with_score(json_result: Dict[str, Any]) → List[Tuple[Document, float]][source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, ids: Optional[List[str]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, config: Optional[AlibabaCloudOpenSearchSettings] = None, **kwargs: Any) → AlibabaCloudOpenSearch[source]¶
Return VectorStore initialized from texts and embeddings.
inner_embedding_query(embedding: List[float], search_filter: Optional[Dict[str, Any]] = None, k: int = 4) → Dict[str, Any][source]¶
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, search_filter: Optional[Dict[str, Any]] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, search_filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using AlibabaCloudOpenSearch¶
Alibaba Cloud OpenSearch
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch.html
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch¶
class langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: Embeddings)[source]¶
Bases: DocArrayIndex
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install “langchain[docarray]”.
Initialize a vector store from DocArray’s DocIndex.
Methods
__init__(doc_index, embedding)
Initialize a vector store from DocArray's DocIndex.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_params(embedding[, metric])
Initialize DocArrayInMemorySearch store.
from_texts(texts, embedding[, metadatas])
Create a DocArrayInMemorySearch store and insert data.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Return docs most similar to query.
Attributes
doc_cls
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
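Upstream provides no summary for as_retriever; it wraps the store in a VectorStoreRetriever. A minimal sketch, assuming store is an already populated DocArrayInMemorySearch instance (search_type and search_kwargs are generic retriever options, not specific to this class):
# Wrap the store as a retriever that uses MMR under the hood.
retriever = store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4},
)
docs = retriever.get_relevant_documents("what is a vector store?")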
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1], most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_params(embedding: Embeddings, metric: Literal['cosine_sim', 'euclidean_dist', 'sqeuclidean_dist'] = 'cosine_sim', **kwargs: Any) → DocArrayInMemorySearch[source]¶
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) – Embedding function.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
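A minimal sketch of initializing an empty store via from_params, assuming OpenAIEmbeddings is configured with a valid API key; the metric can be overridden per the docstring above:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

# Empty in-memory store with the default cosine similarity metric.
store = DocArrayInMemorySearch.from_params(OpenAIEmbeddings())
store.add_texts(["hello world"], metadatas=[{"source": "greeting"}])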
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → DocArrayInMemorySearch[source]¶
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text
if it exists. Defaults to None.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
Returns
DocArrayInMemorySearch Vector Store
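For example, a short end-to-end sketch (the sample texts and metadata are illustrative only):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

store = DocArrayInMemorySearch.from_texts(
    ["dogs are loyal", "cats are independent"],
    OpenAIEmbeddings(),
    metadatas=[{"animal": "dog"}, {"animal": "cat"}],
)
docs = store.similarity_search("which pet is loyal?", k=1)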
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
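An illustrative call, reusing the store from the from_texts sketch above; lambda_mult=0.25 leans toward diversity:
# Fetch 20 candidates, then return the 4 that best trade off
# similarity to the query against diversity among the results.
diverse_docs = store.max_marginal_relevance_search(
    "pets",
    k=4,
    fetch_k=20,
    lambda_mult=0.25,
)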
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – Input text.
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 used
to filter the resulting set of retrieved docs.
Returns
List of Tuples of (doc, similarity_score)
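A sketch that keeps only reasonably confident matches; the 0.8 threshold is an arbitrary illustrative value:
results = store.similarity_search_with_relevance_scores(
    "loyal pets",
    k=4,
    score_threshold=0.8,  # drop docs scoring below 0.8
)
for doc, score in results:
    print(score, doc.page_content)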
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text and
cosine distance in float for each.
Lower score represents more similarity.
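A short sketch contrasting this with the relevance-score variant above; per the contract here, the returned score is a distance, so smaller means closer:
scored = store.similarity_search_with_score("loyal pets", k=2)
for doc, distance in scored:
    # Lower distance = closer match, unlike the [0, 1] relevance scores.
    print(distance, doc.page_content)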
property doc_cls: Type[BaseDoc]¶
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using DocArrayInMemorySearch¶
DocArrayInMemorySearch
langchain.vectorstores.singlestoredb.SingleStoreDB¶
class langchain.vectorstores.singlestoredb.SingleStoreDB(embedding: Embeddings, *, distance_strategy: DistanceStrategy = DistanceStrategy.DOT_PRODUCT, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any)[source]¶
Bases: VectorStore
This class serves as a Pythonic interface to the SingleStore DB database.
The prerequisite for using this class is the installation of the singlestoredb
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
Initialize with necessary components.
Parameters
embedding (Embeddings) – A text embedding model.
distance_strategy (DistanceStrategy, optional) – Determines the strategy employed for calculating
the distance between vectors in the embedding space.
Defaults to DOT_PRODUCT.
Available options are:
- DOT_PRODUCT: Computes the scalar product of two vectors.
This is the default behavior.
- EUCLIDEAN_DISTANCE: Computes the Euclidean distance between
two vectors. This metric considers the geometric distance in
the vector space, and might be more suitable for embeddings
that rely on spatial relationships.
table_name (str, optional) – Specifies the name of the table in use.
Defaults to “embeddings”.
content_field (str, optional) – Specifies the field to store the content.
Defaults to “content”.
metadata_field (str, optional) – Specifies the field to store metadata.
Defaults to “metadata”.
vector_field (str, optional) – Specifies the field to store the vector.
Defaults to “vector”.
The following arguments pertain to the connection pool:
pool_size (int, optional) – Determines the number of active connections in
the pool. Defaults to 5.
max_overflow (int, optional) – Determines the maximum number of connections
allowed beyond the pool_size. Defaults to 10.
timeout (float, optional) – Specifies the maximum wait time in seconds for
establishing a connection. Defaults to 30.
The following arguments pertain to the database connection:
host (str, optional) – Specifies the hostname, IP address, or URL for the
database connection. The default scheme is “mysql”.
user (str, optional) – Database username.
password (str, optional) – Database password.
port (int, optional) – Database port. Defaults to 3306 for non-HTTP
connections, 80 for HTTP connections, and 443 for HTTPS connections.
database (str, optional) – Database name.
Additional optional arguments provide further customization over the connection:
pure_python (bool, optional) – Toggles the connector mode. If True,
operates in pure Python mode.
local_infile (bool, optional) – Allows local file uploads.
charset (str, optional) – Specifies the character set for string values.
ssl_key (str, optional) – Specifies the path of the file containing the SSL
key.
ssl_cert (str, optional) – Specifies the path of the file containing the SSL
certificate.
ssl_ca (str, optional) – Specifies the path of the file containing the SSL
certificate authority.
ssl_cipher (str, optional) – Sets the SSL cipher list.
ssl_disabled (bool, optional) – Disables SSL usage.
ssl_verify_cert (bool, optional) – Verifies the server’s certificate.
Automatically enabled if ssl_ca is specified.
ssl_verify_identity (bool, optional) – Verifies the server’s identity.
conv (dict[int, Callable], optional) – A dictionary of data conversion
functions.
credential_type (str, optional) – Specifies the type of authentication to
use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.
autocommit (bool, optional) – Enables autocommits.
results_type (str, optional) – Determines the structure of the query results:
tuples, namedtuples, dicts.
results_format (str, optional) – Deprecated. This option has been renamed to
results_type.
Examples
Basic Usage:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
vectorstore = SingleStoreDB(
OpenAIEmbeddings(),
host="https://user:password@127.0.0.1:3306/database"
)
Advanced Usage:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
vectorstore = SingleStoreDB(
OpenAIEmbeddings(),
distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
host="127.0.0.1",
port=3306,
user="user",
password="password",
database="db",
table_name="my_custom_table",
pool_size=10,
timeout=60,
)
Using environment variables:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
import os
os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'
vectorstore = SingleStoreDB(OpenAIEmbeddings())
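As a follow-up sketch reusing the vectorstore from the examples above (the sample text is illustrative), inserting and querying data:
vectorstore.add_texts(["SingleStore is a distributed SQL database"])
docs = vectorstore.similarity_search("distributed databases", k=1)
print(docs[0].page_content)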
Methods
__init__(embedding, *[, distance_strategy, ...])
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, embeddings])
Add more texts to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create a SingleStoreDB vectorstore from raw documents. This is a user-friendly interface that: 1. embeds documents, 2. creates a new table for the embeddings in SingleStoreDB, and 3. adds the documents to the newly created table. This is intended to be a quick way to get started.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Returns the most similar indexed documents to the query text.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, filter])
Return docs most similar to query.
Attributes
embeddings
Access the query embedding object if available.
vector_field
connection_kwargs
Pass the rest of the kwargs to the connection. Add program name and version to connection attributes.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) → List[str][source]¶
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) – Optional pre-generated
embeddings. Defaults to None.
Returns
empty list
Return type
List[str]
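A minimal sketch, assuming vectorstore is the SingleStoreDB instance from the class examples; the embeddings parameter lets you reuse vectors computed elsewhere with the same model instead of re-embedding:
from langchain.embeddings import OpenAIEmbeddings

texts = ["vectors can be precomputed"]
# Hypothetical pre-computed vectors; in practice these might come
# from a batch job that already ran the same embedding model.
precomputed = OpenAIEmbeddings().embed_documents(texts)
vectorstore.add_texts(texts, embeddings=precomputed)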
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.