https://api.python.langchain.com/en/latest/modules/tools.html
The unique name of the tool that clearly communicates its purpose. class langchain.tools.GmailGetMessage(*, name='get_gmail_message', description='Use this tool to fetch an email by message ID. Returns the thread ID, snipet, body, subject, and sender.', args_schema=<class 'langchain.tools.gmail.get_message.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source] Bases: langchain.tools.gmail.base.GmailBaseTool Parameters name (str) – description (str) – args_schema (Type[langchain.tools.gmail.get_message.SearchArgsSchema]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_resource (Resource) – Return type None attribute args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = <class 'langchain.tools.gmail.get_message.SearchArgsSchema'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snipet, body, subject, and sender.' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'get_gmail_message' The unique name of the tool that clearly communicates its purpose.
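A minimal usage sketch for GmailGetMessage (not part of the reference above): it assumes the Gmail OAuth flow has already been completed and an authorized token saved locally; the token path, the example message ID, and the message_id input field name are illustrative assumptions.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from langchain.tools import GmailGetMessage

# Build an authorized Gmail API resource (credentials obtained elsewhere).
creds = Credentials.from_authorized_user_file("token.json")
api_resource = build("gmail", "v1", credentials=creds)

tool = GmailGetMessage(api_resource=api_resource)
# Returns the thread ID, snippet, body, subject, and sender for the given message.
print(tool.run({"message_id": "189abc0123456def"}))  # message ID is a placeholder
```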
The unique name of the tool that clearly communicates its purpose. class langchain.tools.GmailGetThread(*, name='get_gmail_thread', description='Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.', args_schema=<class 'langchain.tools.gmail.get_thread.GetThreadSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source] Bases: langchain.tools.gmail.base.GmailBaseTool Parameters name (str) – description (str) – args_schema (Type[langchain.tools.gmail.get_thread.GetThreadSchema]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_resource (Resource) – Return type None attribute args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'get_gmail_thread' The unique name of the tool that clearly communicates its purpose.
The unique name of the tool that clearly communicates its purpose. class langchain.tools.GmailSearch(*, name='search_gmail', description='Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.', args_schema=<class 'langchain.tools.gmail.search.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source] Bases: langchain.tools.gmail.base.GmailBaseTool Parameters name (str) – description (str) – args_schema (Type[langchain.tools.gmail.search.SearchArgsSchema]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_resource (Resource) – Return type None attribute args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = <class 'langchain.tools.gmail.search.SearchArgsSchema'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'search_gmail' The unique name of the tool that clearly communicates its purpose.
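A similar hedged sketch for GmailSearch, again assuming an authorized Gmail API Resource; the Gmail query, the query field name, and the result fields printed below are assumptions for illustration.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from langchain.tools import GmailSearch

creds = Credentials.from_authorized_user_file("token.json")
search = GmailSearch(api_resource=build("gmail", "v1", credentials=creds))

# The tool returns a JSON list of matching messages for a standard Gmail query.
for message in search.run({"query": "from:noreply@github.com newer_than:7d"}):
    print(message["id"], message.get("snippet", ""))
```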
The unique name of the tool that clearly communicates its purpose. class langchain.tools.GmailSendMessage(*, name='send_gmail_message', description='Use this tool to send email messages. The input is the message, recipents', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source] Bases: langchain.tools.gmail.base.GmailBaseTool Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_resource (Resource) – Return type None attribute description: str = 'Use this tool to send email messages. The input is the message, recipents' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'send_gmail_message' The unique name of the tool that clearly communicates its purpose. class langchain.tools.GooglePlacesTool(*, name='google_places', description='A wrapper around Google Places. Useful for when you need to validate or discover addressed from ambiguous text. Input should be a search query.', args_schema=<class 'langchain.tools.google_places.tool.GooglePlacesSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool Tool that adds the capability to query the Google places API. Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.google_places_api.GooglePlacesAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional] attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.google_places.tool.GooglePlacesSchema'> Pydantic model class to validate and parse the tool’s input arguments. class langchain.tools.GoogleSearchResults(*, name='Google Search Results JSON', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that has capability to query the Google Search API and get back json. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – num_results (int) – api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required] attribute num_results: int = 4 class langchain.tools.GoogleSearchRun(*, name='google_search', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to query the Google search API. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]
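A minimal sketch showing both Google Search tools built on GoogleSearchAPIWrapper, assuming GOOGLE_API_KEY and GOOGLE_CSE_ID are set in the environment; the query string is arbitrary.

```python
from langchain.tools import GoogleSearchResults, GoogleSearchRun
from langchain.utilities.google_search import GoogleSearchAPIWrapper

wrapper = GoogleSearchAPIWrapper()  # reads GOOGLE_API_KEY and GOOGLE_CSE_ID

# Plain-text answer snippets.
print(GoogleSearchRun(api_wrapper=wrapper).run("LangChain tools documentation"))

# JSON array of result objects, capped at num_results.
print(GoogleSearchResults(api_wrapper=wrapper, num_results=2).run("LangChain tools documentation"))
```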
attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required] class langchain.tools.GoogleSerperResults(*, name='Google Serrper Results JSON', description='A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON object of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source] Bases: langchain.tools.base.BaseTool Tool that has capability to query the Serper.dev Google Search API and get back json. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional] class langchain.tools.GoogleSerperRun(*, name='google_serper', description='A low-cost Google Search API.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to query the Serper.dev Google search API. Parameters
Tool that adds the capability to query the Serper.dev Google search API. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required] class langchain.tools.HumanInputRun(*, name='human', description='You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, prompt_func=None, input_func=None)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to ask user for input. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – prompt_func (Callable[[str], None]) – input_func (Callable) – Return type None attribute input_func: Callable [Optional] attribute prompt_func: Callable[[str], None] [Optional] class langchain.tools.IFTTTWebhook(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, url)[source] Bases: langchain.tools.base.BaseTool IFTTT Webhook. Parameters name (str) – name of the tool description (str) – description of the tool url (str) – url to hit with the json event. args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – Return type None attribute url: str [Required]
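A sketch for IFTTTWebhook, assuming an IFTTT Webhooks applet listening for an event named spotify and the Webhooks service key exported as IFTTT_KEY; the event name and the input text are illustrative, not part of the reference.

```python
import os

from langchain.tools import IFTTTWebhook

key = os.environ["IFTTT_KEY"]  # your IFTTT Webhooks service key
tool = IFTTTWebhook(
    name="Spotify",
    description="Add a track to a Spotify playlist via an IFTTT applet",
    url=f"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}",
)
print(tool.run("taylor swift"))  # the input string is sent to the webhook
```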
class langchain.tools.InfoPowerBITool(*, name='schema_powerbi', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_powerbi first!\n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source] Bases: langchain.tools.base.BaseTool Tool for getting metadata about a PowerBI Dataset. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – powerbi (langchain.utilities.powerbi.PowerBIDataset) – Return type None attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required] class langchain.tools.InfoSQLDatabaseTool(*, name='sql_db_schema', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.    \n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool Tool for getting metadata about a SQL database. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.sql_database.SQLDatabase) – Return type None class langchain.tools.InfoSparkSQLTool(*, name='schema_sql_db', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_sql_db first!\n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source] Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool Tool for getting metadata about a Spark SQL. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.utilities.spark_sql.SparkSQL) – Return type None class langchain.tools.JiraAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, mode)[source] Bases: langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.jira.JiraAPIWrapper) – mode (str) – Return type None attribute api_wrapper: langchain.utilities.jira.JiraAPIWrapper [Optional] attribute mode: str [Required]
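A hedged sketch for JiraAction: JiraAPIWrapper is assumed to read JIRA_API_TOKEN, JIRA_USERNAME and JIRA_INSTANCE_URL from the environment (and to need the atlassian-python-api package), and the mode value "jql" is an assumption about which wrapper action gets invoked.

```python
from langchain.tools import JiraAction
from langchain.utilities.jira import JiraAPIWrapper

jql_tool = JiraAction(
    name="jira_jql_query",
    description="Run a JQL query against Jira and return the matching issues",
    api_wrapper=JiraAPIWrapper(),  # credentials read from the environment
    mode="jql",                    # mode value assumed for illustration
)
print(jql_tool.run('project = DEMO AND status = "In Progress"'))
```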
class langchain.tools.JsonGetValueTool(*, name='json_spec_get_value', description='\n    Can be used to see value in string format at a given path.\n    Before calling this you should be SURE that the path to this exists.\n    The input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source] Bases: langchain.tools.base.BaseTool Tool for getting a value in a JSON spec. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – spec (langchain.tools.json.tool.JsonSpec) – Return type None attribute spec: JsonSpec [Required]
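A small sketch for the JSON spec tools (JsonListKeysTool is documented next); the JsonSpec field name dict_ and the behaviour of the bare data root are assumptions, while the path syntax follows the description above.

```python
from langchain.tools import JsonGetValueTool, JsonListKeysTool
from langchain.tools.json.tool import JsonSpec

spec = JsonSpec(dict_={"servers": [{"url": "/", "description": "default"}]})  # field name `dict_` assumed

# Paths use the documented Python-style syntax rooted at `data`.
print(JsonListKeysTool(spec=spec).run("data"))                       # top-level keys (assumed behaviour)
print(JsonGetValueTool(spec=spec).run('data["servers"][0]["url"]'))  # value as a string
```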
class langchain.tools.JsonListKeysTool(*, name='json_spec_list_keys', description='\n    Can be used to list all keys at a given path. \n    Before calling this you should be SURE that the path to this exists.\n    The input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source] Bases: langchain.tools.base.BaseTool Tool for listing keys in a JSON spec. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – spec (langchain.tools.json.tool.JsonSpec) – Return type None attribute spec: JsonSpec [Required] class langchain.tools.ListDirectoryTool(*, name='list_directory', description='List files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source] Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool Parameters name (str) –
Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – root_dir (Optional[str]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'List files and directories in a specified folder' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'list_directory' The unique name of the tool that clearly communicates its purpose. class langchain.tools.ListPowerBITool(*, name='list_tables_powerbi', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source] Bases: langchain.tools.base.BaseTool Tool for getting tables names. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – powerbi (langchain.utilities.powerbi.PowerBIDataset) – Return type None attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required] class langchain.tools.ListSQLDatabaseTool(*, name='sql_db_list_tables', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source] Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool Tool for getting tables names. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.sql_database.SQLDatabase) – Return type None
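A minimal sketch of the SQL database tools against a local SQLite file; the URI and the users/orders table names are illustrative assumptions.

```python
from langchain.sql_database import SQLDatabase
from langchain.tools import InfoSQLDatabaseTool, ListSQLDatabaseTool

db = SQLDatabase.from_uri("sqlite:///example.db")  # any SQLAlchemy-compatible URI

# Empty-string input returns a comma-separated list of table names.
print(ListSQLDatabaseTool(db=db).run(""))

# Schema and sample rows for the named tables (assumed to exist in example.db).
print(InfoSQLDatabaseTool(db=db).run("users, orders"))
```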
db (langchain.sql_database.SQLDatabase) – Return type None class langchain.tools.ListSparkSQLTool(*, name='list_tables_sql_db', description='Input is an empty string, output is a comma separated list of tables in the Spark SQL.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source] Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool Tool for getting tables names. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.utilities.spark_sql.SparkSQL) – Return type None class langchain.tools.MetaphorSearchResults(*, name='metaphor_search_results_json', description='A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that has capability to query the Metaphor Search API and get back json. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required] class langchain.tools.MoveFileTool(*, name='move_file', description='Move or rename a file from one location to another', args_schema=<class 'langchain.tools.file_management.move.FileMoveInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source] Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – root_dir (Optional[str]) – Return type None
root_dir (Optional[str]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Move or rename a file from one location to another' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'move_file' The unique name of the tool that clearly communicates its purpose. class langchain.tools.NavigateBackTool(*, name='previous_webpage', description='Navigate back to the previous page in the browser history', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source] Bases: langchain.tools.playwright.base.BaseBrowserTool Navigate back to the previous page in the browser history. Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – sync_browser (Optional['SyncBrowser']) – async_browser (Optional['AsyncBrowser']) – Return type None attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>
Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Navigate back to the previous page in the browser history' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'previous_webpage' The unique name of the tool that clearly communicates its purpose. class langchain.tools.NavigateTool(*, name='navigate_browser', description='Navigate a browser to the specified URL', args_schema=<class 'langchain.tools.playwright.navigate.NavigateToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source] Bases: langchain.tools.playwright.base.BaseBrowserTool Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – sync_browser (Optional['SyncBrowser']) – async_browser (Optional['AsyncBrowser']) – Return type None attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Navigate a browser to the specified URL' Used to tell the model how/when/why to use the tool.
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'navigate_browser' The unique name of the tool that clearly communicates its purpose. class langchain.tools.OpenAPISpec(*, openapi='3.1.0', info, jsonSchemaDialect=None, servers=[Server(url='/', description=None, variables=None)], paths=None, webhooks=None, components=None, security=None, tags=None, externalDocs=None)[source] Bases: openapi_schema_pydantic.v3.v3_1_0.open_api.OpenAPI OpenAPI Model that removes misformatted parts of the spec. Parameters openapi (str) – info (openapi_schema_pydantic.v3.v3_1_0.info.Info) – jsonSchemaDialect (Optional[str]) – servers (List[openapi_schema_pydantic.v3.v3_1_0.server.Server]) – paths (Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]]) – webhooks (Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]]) – components (Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components]) – security (Optional[List[Dict[str, List[str]]]]) – tags (Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]]) – externalDocs (Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation]) – Return type None
Return type None classmethod from_file(path)[source] Get an OpenAPI spec from a file path. Parameters path (Union[str, pathlib.Path]) – Return type langchain.utilities.openapi.OpenAPISpec classmethod from_spec_dict(spec_dict)[source] Get an OpenAPI spec from a dict. Parameters spec_dict (dict) – Return type langchain.utilities.openapi.OpenAPISpec classmethod from_text(text)[source] Get an OpenAPI spec from a text. Parameters text (str) – Return type langchain.utilities.openapi.OpenAPISpec classmethod from_url(url)[source] Get an OpenAPI spec from a URL. Parameters url (str) – Return type langchain.utilities.openapi.OpenAPISpec static get_cleaned_operation_id(operation, path, method)[source] Get a cleaned operation id from an operation id. Parameters operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) – path (str) – method (str) – Return type str get_methods_for_path(path)[source] Return a list of valid methods for the specified path. Parameters path (str) – Return type List[str] get_operation(path, method)[source] Get the operation object for a given path and HTTP method. Parameters path (str) – method (str) – Return type openapi_schema_pydantic.v3.v3_1_0.operation.Operation get_parameters_for_operation(operation)[source] Get the components for a given operation. Parameters operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) – Return type
Return type List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter] get_parameters_for_path(path)[source] Parameters path (str) – Return type List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter] get_referenced_schema(ref)[source] Get a schema (or nested reference) or err. Parameters ref (openapi_schema_pydantic.v3.v3_1_0.reference.Reference) – Return type openapi_schema_pydantic.v3.v3_1_0.schema.Schema get_request_body_for_operation(operation)[source] Get the request body for a given operation. Parameters operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) – Return type Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody] get_schema(schema)[source] Parameters schema (Union[openapi_schema_pydantic.v3.v3_1_0.reference.Reference, openapi_schema_pydantic.v3.v3_1_0.schema.Schema]) – Return type openapi_schema_pydantic.v3.v3_1_0.schema.Schema classmethod parse_obj(obj)[source] Parameters obj (dict) – Return type langchain.utilities.openapi.OpenAPISpec property base_url: str Get the base url.
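A short sketch exercising the OpenAPISpec helpers documented above; the Petstore spec URL and the /pet path are only illustrative inputs.

```python
from langchain.tools import OpenAPISpec

# Load and clean a spec; the URL and path below are just examples.
spec = OpenAPISpec.from_url("https://petstore3.swagger.io/api/v3/openapi.json")
print(spec.base_url)
print(spec.get_methods_for_path("/pet"))

operation = spec.get_operation("/pet", "post")
print(spec.get_parameters_for_operation(operation))
print(spec.get_request_body_for_operation(operation))
```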
property base_url: str Get the base url. class langchain.tools.OpenWeatherMapQueryRun(*, name='OpenWeatherMap', description='A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to query using the OpenWeatherMap API. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional] class langchain.tools.PubmedQueryRun(*, name='PubMed', description='A wrapper around PubMed.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source] Bases: langchain.tools.base.BaseTool
Bases: langchain.tools.base.BaseTool Tool that adds the capability to search using the PubMed API. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.pupmed.PubMedAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional] class langchain.tools.PythonAstREPLTool(*, name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, globals=None, locals=None, sanitize_input=True)[source] Bases: langchain.tools.base.BaseTool A tool for running python code in a REPL. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – globals (Optional[Dict]) – locals (Optional[Dict]) – sanitize_input (bool) – Return type None attribute globals: Optional[Dict] [Optional] attribute locals: Optional[Dict] [Optional] attribute sanitize_input: bool = True class langchain.tools.PythonREPLTool(*, name='Python_REPL', description='A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, python_repl=None, sanitize_input=True)[source] Bases: langchain.tools.base.BaseTool A tool for running python code in a REPL. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – python_repl (langchain.utilities.python.PythonREPL) – sanitize_input (bool) – Return type None attribute python_repl: langchain.utilities.python.PythonREPL [Optional]
attribute sanitize_input: bool = True class langchain.tools.QueryCheckerTool(*, name='query_checker_sql_db', description='\n    Use this tool to double check if your query is correct before executing it.\n    Always use this tool before executing a query with query_sql_db!\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source] Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool Use an LLM to check if a query is correct. Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/ Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.utilities.spark_sql.SparkSQL) – template (str) – llm (langchain.base_language.BaseLanguageModel) – llm_chain (langchain.chains.llm.LLMChain) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute llm_chain: langchain.chains.llm.LLMChain [Required] attribute template: str = '\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'
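A heavily hedged construction sketch for QueryCheckerTool: it assumes pyspark is installed so SparkSQL() can attach to (or create) a Spark session, assumes OPENAI_API_KEY is set for ChatOpenAI, and builds the LLMChain explicitly by reading the default template through the pydantic __fields__ mapping, which is an implementation-detail assumption rather than documented API.

```python
from langchain.chains.llm import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.tools import QueryCheckerTool
from langchain.utilities.spark_sql import SparkSQL

llm = ChatOpenAI(temperature=0)
db = SparkSQL()  # assumes an active or creatable Spark session

# Reuse the tool's default checker prompt (a single {query} placeholder).
prompt = PromptTemplate(
    input_variables=["query"],
    template=QueryCheckerTool.__fields__["template"].default,  # pydantic internals; assumption
)
checker = QueryCheckerTool(db=db, llm=llm, llm_chain=LLMChain(llm=llm, prompt=prompt))
print(checker.run("SELECT name FROM users WHERE signup_date BETWEEN '2023-01-01' AND '2023-02-01'"))
```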
class langchain.tools.QueryPowerBITool(*, name='query_powerbi', description='\n    Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n\n    Example Input: "How many rows are in table1?"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, llm_chain, powerbi, template='\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement
<table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] =
each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions
MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n', examples='\nQuestion: How many rows are in the table <table>?\nDAX:
examples='\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n', session_cache=None, max_iterations=5)[source]
Bases: langchain.tools.base.BaseTool Tool for querying a Power BI Dataset. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – llm_chain (langchain.chains.llm.LLMChain) – powerbi (langchain.utilities.powerbi.PowerBIDataset) – template (Optional[str]) – examples (Optional[str]) – session_cache (Dict[str, Any]) – max_iterations (int) – Return type None attribute examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n' attribute llm_chain: langchain.chains.llm.LLMChain [Required] attribute max_iterations: int = 5
attribute max_iterations: int = 5 attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required] attribute session_cache: Dict[str, Any] [Optional]
attribute template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional
ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed
you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified
date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'
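This template is a plain Python format string; the placeholders {tables}, {schemas}, {examples} and {tool_input} are presumably filled in by the owning Power BI tool before the prompt is sent to the model. A minimal sketch of rendering it by hand (the template variable and every table, schema and example value below are hypothetical, not part of the API):

rendered = template.format(
    tables="'Sales', 'Customers'",  # hypothetical table list
    schemas="Table 'Sales' has columns: Amount (number), Region (string)",  # hypothetical schema text
    examples="Question: total sales\nDAX: EVALUATE ROW(\"Total\", SUM('Sales'[Amount]))",
    tool_input="What were the total sales for France?",
)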
class langchain.tools.QuerySQLCheckerTool(*, name='sql_db_query_checker', description='\nΒ Β Β  Use this tool to double check if your query is correct before executing it.\nΒ Β Β  Always use this tool before executing a query with query_sql_db!\nΒ Β Β  ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source] Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool Use an LLM to check if a query is correct. Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/ Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.sql_database.SQLDatabase) –
db (langchain.sql_database.SQLDatabase) – template (str) – llm (langchain.base_language.BaseLanguageModel) – llm_chain (langchain.chains.llm.LLMChain) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute llm_chain: langchain.chains.llm.LLMChain [Required] attribute template: str = '\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.' class langchain.tools.QuerySQLDataBaseTool(*, name='sql_db_query', description='\nΒ Β Β  Input to this tool is a detailed and correct SQL query, output is a result from the database.\nΒ Β Β  If the query is not correct, an error message will be returned.\nΒ Β Β  If an error is returned, rewrite the query, check the query, and try again.\nΒ Β Β  ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source] Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool Tool for querying a SQL database. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.sql_database.SQLDatabase) – Return type None class langchain.tools.QuerySparkSQLTool(*, name='query_sql_db', description='\n    Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.\n    If the query is not correct, an error message will be returned.\n    If an error is returned, rewrite the query, check the query, and try again.\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source] Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool Tool for querying a Spark SQL. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – db (langchain.utilities.spark_sql.SparkSQL) – Return type None
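A minimal sketch of using the SQL database tools documented above; the SQLite URI, table name and the choice of ChatOpenAI are illustrative assumptions, and recent versions of QuerySQLCheckerTool build their internal LLM chain from the llm argument (older versions may need an explicit llm_chain):

from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase
from langchain.tools import QuerySQLCheckerTool, QuerySQLDataBaseTool

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
llm = ChatOpenAI(temperature=0)

checker = QuerySQLCheckerTool(db=db, llm=llm)
query_tool = QuerySQLDataBaseTool(db=db)

sql = checker.run("SELECT name FROM artists LIMIT 5;")  # ask the LLM to double-check the query
print(query_tool.run(sql))  # then execute it against the database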
class langchain.tools.ReadFileTool(*, name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source] Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – root_dir (Optional[str]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Read file from disk' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'read_file' The unique name of the tool that clearly communicates its purpose.
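For example, a short sketch of ReadFileTool; the root_dir and file name are placeholders, and the file must already exist under root_dir:

from langchain.tools import ReadFileTool

read_tool = ReadFileTool(root_dir="workspace")  # restrict reads to this directory
content = read_tool.run({"file_path": "notes.txt"})  # assumes workspace/notes.txt exists
print(content)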
class langchain.tools.RequestsDeleteTool(*, name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source] Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool Tool for making a DELETE request to an API endpoint. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None class langchain.tools.RequestsGetTool(*, name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source] Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool Tool for making a GET request to an API endpoint. Parameters
name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None class langchain.tools.RequestsPatchTool(*, name='requests_patch', description='Use this when you want to PATCH to a website.\n    Input should be a json string with two keys: "url" and "data".\n    The value of "url" should be a string, and the value of "data" should be a dictionary of \n    key-value pairs you want to PATCH to the url.\n    Be careful to always use double quotes for strings in the json string\n    The output will be the text response of the PATCH request.\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source] Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool Tool for making a PATCH request to an API endpoint. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None class langchain.tools.RequestsPostTool(*, name='requests_post', description='Use this when you want to POST to a website.\n    Input should be a json string with two keys: "url" and "data".\n    The value of "url" should be a string, and the value of "data" should be a dictionary of \n    key-value pairs you want to POST to the url.\n    Be careful to always use double quotes for strings in the json string\n    The output will be the text response of the POST request.\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source] Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool Tool for making a POST request to an API endpoint. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None class langchain.tools.RequestsPutTool(*, name='requests_put', description='Use this when you want to PUT to a website.\n    Input should be a json string with two keys: "url" and "data".\n    The value of "url" should be a string, and the value of "data" should be a dictionary of \n    key-value pairs you want to PUT to the url.\n    Be careful to always use double quotes for strings in the json string.\n    The output will be the text response of the PUT request.\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source] Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool Tool for making a PUT request to an API endpoint. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None
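A minimal sketch of the requests tools above; the URLs are placeholders, and the POST/PUT/PATCH variants expect a JSON string with "url" and "data" keys, as their descriptions state:

from langchain.requests import TextRequestsWrapper
from langchain.tools import RequestsGetTool, RequestsPostTool

wrapper = TextRequestsWrapper()
get_tool = RequestsGetTool(requests_wrapper=wrapper)
post_tool = RequestsPostTool(requests_wrapper=wrapper)

page = get_tool.run("https://example.com")  # plain URL in, response text out
reply = post_tool.run('{"url": "https://httpbin.org/post", "data": {"greeting": "hello"}}')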
class langchain.tools.SceneXplainTool(*, name='image_explainer', description='An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to explain images. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.scenexplain.SceneXplainAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional] class langchain.tools.SearxSearchResults(*, name='Searx Search Results', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, num_results=4, kwargs=None, **extra_data)[source]
Bases: langchain.tools.base.BaseTool Tool that has the capability to query a Searx instance and get back json. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – wrapper (langchain.utilities.searx_search.SearxSearchWrapper) – num_results (int) – kwargs (dict) – extra_data (Any) – Return type None attribute kwargs: dict [Optional] attribute num_results: int = 4 attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required] class langchain.tools.SearxSearchRun(*, name='searx_search', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, kwargs=None)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to query a Searx instance. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – wrapper (langchain.utilities.searx_search.SearxSearchWrapper) – kwargs (dict) – Return type None attribute kwargs: dict [Optional] attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required] class langchain.tools.ShellTool(*, name='terminal', description='Run shell commands on this Linux machine.', args_schema=<class 'langchain.tools.shell.tool.ShellInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, process=None)[source] Bases: langchain.tools.base.BaseTool Tool to run shell commands. Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – process (langchain.utilities.bash.BashProcess) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'> Schema for input arguments.
attribute description: str = 'Run shell commands on this Linux machine.' Description of tool. attribute name: str = 'terminal' Name of tool. attribute process: langchain.utilities.bash.BashProcess [Optional] Bash process to run commands. class langchain.tools.SleepTool(*, name='sleep', description='Make agent sleep for a specified number of seconds.', args_schema=<class 'langchain.tools.sleep.tool.SleepInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to sleep. Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.sleep.tool.SleepInput'> Pydantic model class to validate and parse the tool’s input arguments. langchain.tools.StdInInquireTool(*args, **kwargs)[source] Tool for asking the user for input. Parameters args (Any) – kwargs (Any) – Return type langchain.tools.human.tool.HumanInputRun
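A small sketch of the shell and sleep tools above. The commands are placeholders, ShellTool executes them on the local machine (so treat it with care), and the input keys shown ("commands", "sleep_time") follow the ShellInput and SleepInput schemas as commonly documented; adjust if your version differs:

from langchain.tools import ShellTool, SleepTool

shell_tool = ShellTool()
print(shell_tool.run({"commands": ["echo 'Hello World'", "uname -a"]}))

sleep_tool = SleepTool()
sleep_tool.run({"sleep_time": 2})  # pause the agent for two seconds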
class langchain.tools.SteamshipImageGenerationTool(*, name='GenerateImage', description='Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, model_name, size='512x512', steamship, return_urls=False)[source] Bases: langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – model_name (langchain.tools.steamship_image_generation.tool.ModelName) – size (Optional[str]) – steamship (Steamship) – return_urls (Optional[bool]) – Return type None attribute model_name: ModelName [Required] attribute return_urls: Optional[bool] = False attribute size: Optional[str] = '512x512' attribute steamship: Steamship [Required] class langchain.tools.StructuredTool(*, name, description='', args_schema, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, func, coroutine=None)[source] Bases: langchain.tools.base.BaseTool
Bases: langchain.tools.base.BaseTool Tool that can operate on any number of inputs. Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – func (Callable[[...], Any]) – coroutine (Optional[Callable[[...], Awaitable[Any]]]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] [Required] The input arguments’ schema. The tool schema. attribute coroutine: Optional[Callable[[...], Awaitable[Any]]] = None The asynchronous version of the function. attribute description: str = '' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute func: Callable[[...], Any] [Required] The function to run when the tool is called. classmethod from_function(func, name=None, description=None, return_direct=False, args_schema=None, infer_schema=True, **kwargs)[source] Create tool from a given function. A classmethod that helps to create a tool from a function. Parameters func (Callable) – The function from which to create a tool name (Optional[str]) – The name of the tool. Defaults to the function name description (Optional[str]) – The description of the tool. Defaults to the function docstring
return_direct (bool) – Whether to return the result directly or as a callback args_schema (Optional[Type[pydantic.main.BaseModel]]) – The schema of the tool’s input arguments infer_schema (bool) – Whether to infer the schema from the function’s signature **kwargs – Additional arguments to pass to the tool kwargs (Any) – Returns The tool Return type langchain.tools.base.StructuredTool Examples:
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
property args: dict The tool’s input arguments. class langchain.tools.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source] Bases: langchain.tools.base.BaseTool Tool that takes in function or coroutine directly. Parameters name (str) – func (Callable[[...], str]) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) –
Return type None attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = None Pydantic model class to validate and parse the tool’s input arguments. attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None Deprecated. Please use callbacks instead. attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None Callbacks to be called during tool execution. attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = None The asynchronous version of the function. attribute description: str = '' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute func: Callable[[...], str] [Required] The function to run when the tool is called. attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False Handle the content of the ToolException thrown. attribute name: str [Required] The unique name of the tool that clearly communicates its purpose. attribute return_direct: bool = False Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. attribute verbose: bool = False Whether to log the tool’s progress. classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source] Initialize tool from a function. Parameters func (Callable) – name (str) – description (str) – return_direct (bool) – args_schema (Optional[Type[pydantic.main.BaseModel]]) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) – kwargs (Any) – Return type langchain.tools.base.Tool property args: dict The tool’s input arguments. class langchain.tools.VectorStoreQATool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source] Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool Tool for the VectorDBQA chain. To be initialized with name and chain. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – vectorstore (langchain.vectorstores.base.VectorStore) – llm (langchain.base_language.BaseLanguageModel) – Return type None static get_description(name, description)[source] Parameters name (str) – description (str) – Return type str class langchain.tools.VectorStoreQAWithSourcesTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source] Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool
Tool for the VectorDBQAWithSources chain. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – vectorstore (langchain.vectorstores.base.VectorStore) – llm (langchain.base_language.BaseLanguageModel) – Return type None static get_description(name, description)[source] Parameters name (str) – description (str) – Return type str class langchain.tools.WikipediaQueryRun(*, name='Wikipedia', description='A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to search using the Wikipedia API. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.wikipedia.WikipediaAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required] class langchain.tools.WolframAlphaQueryRun(*, name='wolfram_alpha', description='A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source] Bases: langchain.tools.base.BaseTool Tool that adds the capability to query using the Wolfram Alpha SDK. Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper) – Return type None attribute api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]
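A short sketch of the Wikipedia tool above; the wikipedia Python package must be installed for the wrapper to work, and the query is a placeholder. WolframAlphaQueryRun is wired up the same way, except its WolframAlphaAPIWrapper needs a Wolfram Alpha app ID:

from langchain.utilities import WikipediaAPIWrapper
from langchain.tools import WikipediaQueryRun

wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
print(wiki_tool.run("Ada Lovelace"))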
class langchain.tools.WriteFileTool(*, name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source] Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Type[pydantic.main.BaseModel]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – root_dir (Optional[str]) – Return type None attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'> Pydantic model class to validate and parse the tool’s input arguments. attribute description: str = 'Write file to disk' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute name: str = 'write_file' The unique name of the tool that clearly communicates its purpose.
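For example, a minimal sketch of WriteFileTool; root_dir, the file name and the text are placeholders:

from langchain.tools import WriteFileTool

write_tool = WriteFileTool(root_dir="workspace")  # paths are resolved relative to root_dir
write_tool.run({"file_path": "notes.txt", "text": "Meeting at 3pm."})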
class langchain.tools.YouTubeSearchTool(*, name='youtube_search', description='search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source] Bases: langchain.tools.base.BaseTool Parameters name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – Return type None
Return type None class langchain.tools.ZapierNLAListActions(*, name='ZapierNLA_list_actions', description='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\'s exposed actions.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source] Bases: langchain.tools.base.BaseTool Returns a list of all exposed (enabled) actions associated withcurrent user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return list can be empty if no actions exposed. Else will contain a list of action objects: [{β€œid”: str, β€œdescription”: str, β€œparams”: Dict[str, str]
β€œdescription”: str, β€œparams”: Dict[str, str] }] params will always contain an instructions key, the only required param. All others optional and if provided will override any AI guesses (see β€œunderstanding the AI guessing flow” here: https://nla.zapier.com/docs/using-the-api#ai-guessing) Parameters None – name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – Return type None attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]
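A minimal sketch of listing the exposed Zapier actions. This assumes the ZAPIER_NLA_API_KEY environment variable is set; each returned action id can then be passed to ZapierNLARunAction, documented below:

from langchain.utilities.zapier import ZapierNLAWrapper
from langchain.tools import ZapierNLAListActions

wrapper = ZapierNLAWrapper()
list_tool = ZapierNLAListActions(api_wrapper=wrapper)
print(list_tool.run(""))  # the input string is not used when listing actions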
class langchain.tools.ZapierNLARunAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, action_id, params=None, base_prompt='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description, params_schema=None)[source] Bases: langchain.tools.base.BaseTool Executes an action that is identified by action_id, must be exposed(enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return JSON is guaranteed to be less than ~500 words (350 tokens) making it safe to inject into the prompt of another LLM call. Parameters
tokens) making it safe to inject into the prompt of another LLM call. Parameters action_id (str) – a specific action ID (from list actions) of the action to execute (the set api_key must be associated with the action owner) instructions – a natural language instruction string for using the action (eg. β€œget the latest email from Mike Knoop” for β€œGmail: find email” action) params (Optional[dict]) – a dict, optional. Any params provided will override AI guesses from instructions (see β€œunderstanding the AI guessing flow” here: https://nla.zapier.com/docs/using-the-api#ai-guessing) name (str) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – base_prompt (str) – zapier_description (str) – params_schema (Dict[str, str]) – Return type None attribute action_id: str [Required] attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]
attribute base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}' attribute params: Optional[dict] = None attribute params_schema: Dict[str, str] [Optional] attribute zapier_description: str [Required] langchain.tools.format_tool_to_openai_function(tool)[source] Format tool into the OpenAI function API. Parameters tool (langchain.tools.base.BaseTool) – Return type langchain.tools.convert_to_openai.FunctionDescription langchain.tools.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source] Make tools out of functions, can be used with or without arguments. Parameters *args – The arguments to the tool. return_direct (bool) – Whether to return directly from the tool rather
than continuing the agent loop. args_schema (Optional[Type[pydantic.main.BaseModel]]) – optional argument schema for user to specify infer_schema (bool) – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function. args (Union[str, Callable]) – Return type Callable Requires: Function must be of type (str) -> str Function must have a docstring Examples:
@tool
def search_api(query: str) -> str:
    # Searches the API for the query.
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    # Searches the API for the query.
    return
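A short sketch combining the @tool decorator with format_tool_to_openai_function from above; the function body is a stand-in, and the resulting dictionaries are meant to be passed as the functions argument of an OpenAI chat completion call that supports function calling:

from langchain.tools import format_tool_to_openai_function, tool

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return f"results for {query}"

functions = [format_tool_to_openai_function(search_api)]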
Experimental This module contains experimental modules and reproductions of existing work using LangChain primitives. Autonomous agents Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module. class langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source] Bases: langchain.chains.base.Chain, pydantic.main.BaseModel Controller model for the BabyAGI agent. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – task_list (collections.deque) – task_creation_chain (langchain.chains.base.Chain) – task_prioritization_chain (langchain.chains.base.Chain) – execution_chain (langchain.chains.base.Chain) – task_id_counter (int) – vectorstore (langchain.vectorstores.base.VectorStore) – max_iterations (Optional[int]) – Return type None model Config[source] Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True property input_keys: List[str] Input keys this chain expects. property output_keys: List[str] Output keys this chain expects. get_next_task(result, task_description, objective)[source] Get the next task. Parameters result (str) – task_description (str) –
Parameters result (str) – task_description (str) – objective (str) – Return type List[Dict] prioritize_tasks(this_task_id, objective)[source] Prioritize tasks. Parameters this_task_id (int) – objective (str) – Return type List[Dict] execute_task(objective, task, k=5)[source] Execute a task. Parameters objective (str) – task (str) – k (int) – Return type str classmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source] Initialize the BabyAGI Controller. Parameters llm (langchain.base_language.BaseLanguageModel) – vectorstore (langchain.vectorstores.base.VectorStore) – verbose (bool) – task_execution_chain (Optional[langchain.chains.base.Chain]) – kwargs (Dict[str, Any]) – Return type langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI class langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source] Bases: object Agent class for interacting with Auto-GPT. Parameters ai_name (str) – memory (VectorStoreRetriever) – chain (LLMChain) – output_parser (BaseAutoGPTOutputParser) – tools (List[BaseTool]) – feedback_tool (Optional[HumanInputRun]) – chat_history_memory (Optional[BaseChatMessageHistory]) – Generative agents Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source] Bases: pydantic.main.BaseModel A character with memory and innate characteristics. Parameters name (str) – age (Optional[int]) – traits (str) – status (str) – memory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) – llm (langchain.base_language.BaseLanguageModel) – verbose (bool) – summary (str) – summary_refresh_seconds (int) – last_refreshed (datetime.datetime) – daily_summaries (List[str]) – Return type None attribute name: str [Required] The character’s name. attribute age: Optional[int] = None The optional age of the character. attribute traits: str = 'N/A' Permanent traits to ascribe to the character. attribute status: str [Required] The traits of the character you wish not to change. attribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required] The memory object that combines relevance, recency, and β€˜importance’. attribute llm: langchain.base_language.BaseLanguageModel [Required] The underlying language model. attribute summary: str = '' Stateful self-summary generated via reflection on the character’s memory. attribute summary_refresh_seconds: int = 3600 How frequently to re-generate the summary. attribute last_refreshed: datetime.datetime [Optional] The last time the character’s summary was regenerated.
The last time the character’s summary was regenerated. attribute daily_summaries: List[str] [Optional] Summary of the events in the plan that the agent took. model Config[source] Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True summarize_related_memories(observation)[source] Summarize memories that are most relevant to an observation. Parameters observation (str) – Return type str generate_reaction(observation, now=None)[source] React to a given observation. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type Tuple[bool, str] generate_dialogue_response(observation, now=None)[source] React to a given observation. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type Tuple[bool, str] get_summary(force_refresh=False, now=None)[source] Return a descriptive summary of the agent. Parameters force_refresh (bool) – now (Optional[datetime.datetime]) – Return type str get_full_header(force_refresh=False, now=None)[source] Return a full header of the agent’s status, summary, and current time. Parameters force_refresh (bool) – now (Optional[datetime.datetime]) – Return type str
now (Optional[datetime.datetime]) – Return type str class langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source] Bases: langchain.schema.BaseMemory Parameters llm (langchain.base_language.BaseLanguageModel) – memory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) – verbose (bool) – reflection_threshold (Optional[float]) – current_plan (List[str]) – importance_weight (float) – aggregate_importance (float) – max_tokens_limit (int) – queries_key (str) – most_recent_memories_token_key (str) – add_memory_key (str) – relevant_memories_key (str) – relevant_memories_simple_key (str) – most_recent_memories_key (str) – now_key (str) – reflecting (bool) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Required] The core language model. attribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required] The retriever to fetch related memories. attribute reflection_threshold: Optional[float] = None
attribute reflection_threshold: Optional[float] = None When aggregate_importance exceeds reflection_threshold, stop to reflect. attribute current_plan: List[str] = [] The current plan of the agent. attribute importance_weight: float = 0.15 How much weight to assign the memory importance. attribute aggregate_importance: float = 0.0 Track the sum of the β€˜importance’ of recent memories. Triggers reflection when it reaches reflection_threshold. pause_to_reflect(now=None)[source] Reflect on recent observations and generate β€˜insights’. Parameters now (Optional[datetime.datetime]) – Return type List[str] add_memories(memory_content, now=None)[source] Add an observations or memories to the agent’s memory. Parameters memory_content (str) – now (Optional[datetime.datetime]) – Return type List[str] add_memory(memory_content, now=None)[source] Add an observation or memory to the agent’s memory. Parameters memory_content (str) – now (Optional[datetime.datetime]) – Return type List[str] fetch_memories(observation, now=None)[source] Fetch related memories. Parameters observation (str) – now (Optional[datetime.datetime]) – Return type List[langchain.schema.Document] property memory_variables: List[str] Input keys this memory class will load dynamically. load_memory_variables(inputs)[source] Return key-value pairs given the text input to the chain. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Save the context of this model run to memory. Parameters inputs (Dict[str, Any]) –
outputs (Dict[str, Any]) – Return type None clear()[source] Clear memory contents. Return type None
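A condensed sketch of wiring GenerativeAgentMemory and GenerativeAgent together, following the signatures documented above. The FAISS index, the OpenAI models and the character details are illustrative assumptions; faiss-cpu and an OpenAI API key are required:

import faiss
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import GenerativeAgent, GenerativeAgentMemory
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

llm = ChatOpenAI(temperature=0.7)
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # 1536 = OpenAI embedding dimension
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, k=5)

memory = GenerativeAgentMemory(llm=llm, memory_retriever=retriever, reflection_threshold=8)
tommie = GenerativeAgent(
    name="Tommie",
    age=25,
    traits="curious, likes long walks",
    status="looking for a job",
    llm=llm,
    memory=memory,
)
memory.add_memory("Tommie adopted a dog named Bruno.")
print(tommie.get_summary(force_refresh=True))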
Chains Chains are easily reusable components which can be linked together. class langchain.chains.APIChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_answer_chain, requests_wrapper, api_docs, question_key='question', output_key='output')[source] Bases: langchain.chains.base.Chain Chain that makes API calls and summarizes the responses to answer a question. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – api_request_chain (langchain.chains.llm.LLMChain) – api_answer_chain (langchain.chains.llm.LLMChain) – requests_wrapper (langchain.requests.TextRequestsWrapper) – api_docs (str) – question_key (str) – output_key (str) – Return type None attribute api_answer_chain: LLMChain [Required] attribute api_docs: str [Required] attribute api_request_chain: LLMChain [Required] attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute memory: Optional[BaseMemory] = None
for full details. attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute requests_wrapper: TextRequestsWrapper [Required] attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain.
use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict
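A minimal usage sketch for APIChain, assuming an OpenAI LLM and a hypothetical documentation string for the target API; it uses the from_llm_and_api_docs constructor documented below.
from langchain.llms import OpenAI
from langchain.chains import APIChain

llm = OpenAI(temperature=0)
# Hypothetical documentation string; in practice, paste the relevant parts
# of the target API's documentation here.
api_docs = """
BASE URL: https://api.example.com/
The /weather endpoint accepts a `city` query parameter and returns the
current temperature as JSON.
"""
chain = APIChain.from_llm_and_api_docs(llm, api_docs, verbose=True)
# The chain first generates an API URL from the question, calls it through the
# requests wrapper, then summarizes the response to answer the question.
chain.run("What is the weather in Paris right now?")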
Parameters kwargs (Any) – Return type Dict classmethod from_llm_and_api_docs(llm, api_docs, headers=None, api_url_prompt=PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt=PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs)[source] Load chain from just an LLM and the api docs. Parameters llm (langchain.base_language.BaseLanguageModel) – api_docs (str) – headers (Optional[dict]) – api_url_prompt (langchain.prompts.base.BasePromptTemplate) –
api_url_prompt (langchain.prompts.base.BasePromptTemplate) – api_response_prompt (langchain.prompts.base.BasePromptTemplate) – kwargs (Any) – Return type langchain.chains.api.base.APIChain prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path=”path/chain.yaml”) to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]
Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.AnalyzeDocumentChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_document', text_splitter=None, combine_docs_chain)[source] Bases: langchain.chains.base.Chain Chain that splits documents, then analyzes them in pieces. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – input_key (str) – text_splitter (langchain.text_splitter.TextSplitter) – combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) – Return type None attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required] attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. attribute text_splitter: langchain.text_splitter.TextSplitter [Optional] attribute verbose: bool [Optional] Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str]
return_only_outputs (bool) – Return type Dict[str, str] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path=”path/chain.yaml”) to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable.
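A minimal usage sketch for AnalyzeDocumentChain, assuming a map-reduce summarization chain is used as the combine_docs_chain and that a long text file is available locally; the text splitter is optional and falls back to the class default.
from langchain.llms import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)

# Hypothetical input file; any long string works as input_document.
with open("state_of_the_union.txt") as f:
    long_text = f.read()
# The document is split by text_splitter, each piece is passed to the
# combine_docs_chain, and a single combined result is returned.
summarize_document_chain.run(long_text)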
property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.ChatVectorDBChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, vectorstore, top_k_docs_for_context=4, search_kwargs=None)[source] Bases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain Chain for chatting with a vector database. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) – question_generator (langchain.chains.llm.LLMChain) – output_key (str) – return_source_documents (bool) – return_generated_question (bool) – get_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) – vectorstore (langchain.vectorstores.base.VectorStore) – top_k_docs_for_context (int) – search_kwargs (dict) – Return type None attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain,
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute combine_docs_chain: BaseCombineDocumentsChain [Required] attribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None Return the source documents. attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute output_key: str = 'answer' attribute question_generator: LLMChain [Required] attribute return_generated_question: bool = False attribute return_source_documents: bool = False attribute search_kwargs: dict [Optional] attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. attribute top_k_docs_for_context: int = 4 attribute vectorstore: VectorStore [Required] attribute verbose: bool [Optional] Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)
Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict
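A minimal usage sketch for ChatVectorDBChain via the from_llm constructor documented below, assuming vectorstore is an already-populated VectorStore (for example a FAISS or Chroma index built elsewhere).
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

# `vectorstore` is assumed to be an existing, populated VectorStore instance.
qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)

chat_history = []
result = qa({"question": "What did the author work on?", "chat_history": chat_history})
chat_history.append(("What did the author work on?", result["answer"]))
# Follow-up questions are condensed into standalone questions with the
# condense_question_prompt before documents are retrieved.
qa({"question": "Why was that interesting?", "chat_history": chat_history})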
Parameters kwargs (Any) – Return type Dict classmethod from_llm(llm, vectorstore, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source] Load chain from LLM. Parameters llm (langchain.base_language.BaseLanguageModel) – vectorstore (langchain.vectorstores.base.VectorStore) – condense_question_prompt (langchain.prompts.base.BasePromptTemplate) – chain_type (str) – combine_docs_chain_kwargs (Optional[Dict]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters
Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path=”path/chain.yaml”) to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property input_keys: List[str] Input keys. property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.ConstitutionalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chain, constitutional_principles, critique_chain, revision_chain, return_intermediate_steps=False)[source] Bases: langchain.chains.base.Chain
Chain for applying constitutional principles. Example:
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

llm = OpenAI()
qa_prompt = PromptTemplate(
    template="Q: {question} A:",
    input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[
        ConstitutionalPrinciple(
            critique_request="Tell if this answer is good.",
            revision_request="Give a better answer.",
        )
    ],
)
constitutional_chain.run(question="What is the meaning of life?")
Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – chain (langchain.chains.llm.LLMChain) – constitutional_principles (List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]) – critique_chain (langchain.chains.llm.LLMChain) – revision_chain (langchain.chains.llm.LLMChain) – return_intermediate_steps (bool) – Return type None attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None.
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute chain: langchain.chains.llm.LLMChain [Required] attribute constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required] attribute critique_chain: langchain.chains.llm.LLMChain [Required] attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute return_intermediate_steps: bool = False attribute revision_chain: langchain.chains.llm.LLMChain [Required] attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired.
Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict
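Beyond hand-written principles, the get_principles classmethod documented further below can load built-in principles by name; a short sketch, assuming llm and qa_chain from the example above and that "illegal" is a registered principle name.
from langchain.chains import ConstitutionalChain

# Assumes `llm` and `qa_chain` are defined as in the class example, and that
# "illegal" is a key in the built-in principle registry.
principles = ConstitutionalChain.get_principles(["illegal"])
constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=principles,
    verbose=True,
)
constitutional_chain.run(question="How can I steal kittens?")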
classmethod from_llm(llm, chain, critique_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know
are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but
I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are
Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be
toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have
harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly
I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no
Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs)[source]
Create a chain from an LLM. Parameters llm (langchain.base_language.BaseLanguageModel) – chain (langchain.chains.llm.LLMChain) – critique_prompt (langchain.prompts.base.BasePromptTemplate) – revision_prompt (langchain.prompts.base.BasePromptTemplate) – kwargs (Any) – Return type langchain.chains.constitutional_ai.base.ConstitutionalChain classmethod get_principles(names=None)[source] Parameters names (Optional[List[str]]) – Return type List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path=”path/chain.yaml”) to_json() Return type
chain.save(file_path=”path/chain.yaml”) to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property input_keys: List[str] Defines the input keys. property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. property output_keys: List[str] Defines the output keys. class langchain.chains.ConversationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True), llm, output_key='response', output_parser=None, return_final_only=True, llm_kwargs=None, input_key='input')[source] Bases: langchain.chains.llm.LLMChain
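A minimal usage sketch for ConversationChain, assuming a ConversationBufferMemory is attached so the {history} placeholder in the default prompt is filled on each turn.
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # stores prior turns under the "history" key
    verbose=True,
)
conversation.predict(input="Hi there!")
conversation.predict(input="What did I just say?")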