class langchain.tools.GmailGetMessage(*, name='get_gmail_message', description='Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.', args_schema=<class 'langchain.tools.gmail.get_message.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[langchain.tools.gmail.get_message.SearchArgsSchema]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_resource (Resource) –
Return type
None
attribute args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = <class 'langchain.tools.gmail.get_message.SearchArgsSchema'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_gmail_message'
The unique name of the tool that clearly communicates its purpose.
Source: https://api.python.langchain.com/en/latest/modules/tools.html
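The attributes above (name, description, args_schema, return_direct, handle_tool_error) follow the shared BaseTool pattern. The dependency-free sketch below is not langchain's actual implementation — MiniTool and fake_get_message are hypothetical stand-ins — but it illustrates how the handle_tool_error contract is commonly interpreted: False re-raises, True substitutes the exception message, a string is returned verbatim, and a callable formats the exception.

```python
from dataclasses import dataclass
from typing import Callable, Union

# Hypothetical stand-in for langchain.tools.base.ToolException.
class ToolException(Exception):
    pass

@dataclass
class MiniTool:
    name: str
    description: str
    func: Callable[[str], str]
    return_direct: bool = False
    handle_tool_error: Union[bool, str, Callable[[ToolException], str]] = False

    def run(self, tool_input: str) -> str:
        try:
            return self.func(tool_input)
        except ToolException as e:
            # Mirror the handle_tool_error contract described above.
            if self.handle_tool_error is False:
                raise
            if self.handle_tool_error is True:
                return str(e)
            if isinstance(self.handle_tool_error, str):
                return self.handle_tool_error
            return self.handle_tool_error(e)

def fake_get_message(message_id: str) -> str:
    # Stand-in for the real Gmail API call.
    if not message_id:
        raise ToolException("empty message ID")
    return f"snippet for {message_id}"

tool = MiniTool(
    name="get_gmail_message",
    description="Use this tool to fetch an email by message ID.",
    func=fake_get_message,
    handle_tool_error="could not fetch message",
)
print(tool.run("abc123"))  # snippet for abc123
print(tool.run(""))        # could not fetch message
```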
class langchain.tools.GmailGetThread(*, name='get_gmail_thread', description='Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.', args_schema=<class 'langchain.tools.gmail.get_thread.GetThreadSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[langchain.tools.gmail.get_thread.GetThreadSchema]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_resource (Resource) –
Return type
None
attribute args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'get_gmail_thread'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailSearch(*, name='search_gmail', description='Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.', args_schema=<class 'langchain.tools.gmail.search.SearchArgsSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[langchain.tools.gmail.search.SearchArgsSchema]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_resource (Resource) –
Return type
None
attribute args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = <class 'langchain.tools.gmail.search.SearchArgsSchema'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'search_gmail'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GmailSendMessage(*, name='send_gmail_message', description='Use this tool to send email messages. The input is the message, recipients', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]
Bases: langchain.tools.gmail.base.GmailBaseTool
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_resource (Resource) –
Return type
None
attribute description: str = 'Use this tool to send email messages. The input is the message, recipients'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'send_gmail_message'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.GooglePlacesTool(*, name='google_places', description='A wrapper around Google Places. Useful for when you need to validate or discover addresses from ambiguous text. Input should be a search query.', args_schema=<class 'langchain.tools.google_places.tool.GooglePlacesSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Google Places API.
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.google_places_api.GooglePlacesAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.google_places.tool.GooglePlacesSchema'>
Pydantic model class to validate and parse the tool's input arguments.
class langchain.tools.GoogleSearchResults(*, name='Google Search Results JSON', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that has the capability to query the Google Search API and get back JSON.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
num_results (int) –
api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]
attribute num_results: int = 4
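num_results caps how many hits the tool serializes into its JSON output. A rough sketch under assumed behavior — fake_search is a hypothetical stand-in for GoogleSearchAPIWrapper.results, which actually calls the Google Custom Search API over the network:

```python
import json
from typing import Dict, List

# Hypothetical stand-in for the network-backed search wrapper.
def fake_search(query: str, num_results: int) -> List[Dict[str, str]]:
    hits = [{"title": f"result {i} for {query}", "link": f"https://example.com/{i}"}
            for i in range(10)]
    return hits[:num_results]

def google_search_results(query: str, num_results: int = 4) -> str:
    # Serialize the top-k hits to JSON, as the tool's description promises.
    return json.dumps(fake_search(query, num_results))

print(len(json.loads(google_search_results("langchain"))))  # 4
```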
class langchain.tools.GoogleSearchRun(*, name='google_search', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Google search API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]
class langchain.tools.GoogleSerperResults(*, name='Google Serper Results JSON', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON object of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that has the capability to query the Serper.dev Google Search API and get back JSON.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]
class langchain.tools.GoogleSerperRun(*, name='google_serper', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query the Serper.dev Google search API.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]
class langchain.tools.HumanInputRun(*, name='human', description='You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, prompt_func=None, input_func=None)[source]
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to ask the user for input.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
prompt_func (Callable[[str], None]) –
input_func (Callable) –
Return type
None
attribute input_func: Callable [Optional]
attribute prompt_func: Callable[[str], None] [Optional]
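prompt_func and input_func let you swap out how the question is shown and how the answer is collected. A small sketch of that flow; human_input_run here is an illustrative standalone function (not the tool's real method), and the defaults assume printing to stdout and reading via the built-in input():

```python
from typing import Callable, Optional

def default_prompt(text: str) -> None:
    # Default display: print the model's question to stdout.
    print("\n" + text)

def human_input_run(query: str,
                    prompt_func: Optional[Callable[[str], None]] = None,
                    input_func: Optional[Callable[[], str]] = None) -> str:
    """Show the model's question to a human and return the human's answer."""
    (prompt_func or default_prompt)(query)
    return (input_func or input)()

# Inject a canned input_func so the example runs without a terminal.
answer = human_input_run("Which file should I edit?",
                         input_func=lambda: "config.yaml")
print(answer)  # config.yaml
```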
class langchain.tools.IFTTTWebhook(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, url)[source]
Bases: langchain.tools.base.BaseTool
IFTTT Webhook.
Parameters
name (str) – name of the tool
description (str) – description of the tool
url (str) – URL to hit with the JSON event.
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
Return type
None
attribute url: str [Required]
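The tool simply POSTs a JSON event to the stored url. A sketch, assuming the standard IFTTT Webhooks trigger URL shape; the event name and key below are placeholders, and the do_post guard keeps the example runnable offline:

```python
import json
import urllib.request

def ifttt_url(event: str, key: str) -> str:
    # IFTTT Webhooks trigger URL shape; event and key are placeholders.
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def run_webhook(url: str, tool_input: str, do_post: bool = True) -> str:
    """POST the tool input as the 'value1' field of the JSON event."""
    body = json.dumps({"value1": tool_input}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    if do_post:  # guard so the sketch can be exercised without network access
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    return req.full_url

print(ifttt_url("email_sent", "MY_KEY"))
# https://maker.ifttt.com/trigger/email_sent/with/key/MY_KEY
```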
class langchain.tools.InfoPowerBITool(*, name='schema_powerbi', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_powerbi first!\n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]
Bases: langchain.tools.base.BaseTool
Tool for getting metadata about a PowerBI Dataset.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
powerbi (langchain.utilities.powerbi.PowerBIDataset) –
Return type
None
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]
class langchain.tools.InfoSQLDatabaseTool(*, name='sql_db_schema', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for getting metadata about a SQL database.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
db (langchain.sql_database.SQLDatabase) –
Return type
None
class langchain.tools.InfoSparkSQLTool(*, name='schema_sql_db', description='\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_sql_db first!\n\n    Example Input: "table1, table2, table3"\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for getting metadata about a Spark SQL database.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
db (langchain.utilities.spark_sql.SparkSQL) –
Return type
None
class langchain.tools.JiraAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, mode)[source]
Bases: langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.jira.JiraAPIWrapper) –
mode (str) –
Return type
None
attribute api_wrapper: langchain.utilities.jira.JiraAPIWrapper [Optional]
attribute mode: str [Required]
class langchain.tools.JsonGetValueTool(*, name='json_spec_get_value', description='\n    Can be used to see value in string format at a given path.\n    Before calling this you should be SURE that the path to this exists.\n    The input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]
Bases: langchain.tools.base.BaseTool
Tool for getting a value in a JSON spec.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
spec (langchain.tools.json.tool.JsonSpec) –
Return type
None
attribute spec: JsonSpec [Required]
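The path syntax named in the description (data["key1"][0]["key2"]) can be evaluated with a small parser. This is a simplified, self-contained stand-in for what the JsonSpec-backed tool does with its text input, not langchain's actual code:

```python
import re
from typing import Any, List, Union

def parse_path(path: str) -> List[Union[str, int]]:
    """Turn 'data["key1"][0]["key2"]' into ['key1', 0, 'key2']."""
    tokens = re.findall(r'\[\s*(?:"([^"]*)"|\'([^\']*)\'|(\d+))\s*\]', path)
    # Each match is (double-quoted, single-quoted, numeric-index).
    return [int(num) if num else (dq or sq) for dq, sq, num in tokens]

def json_get_value(data: Any, path: str) -> Any:
    # Walk the parsed keys/indices down into the nested structure.
    for key in parse_path(path):
        data = data[key]
    return data

spec = {"key1": [{"key2": 42}]}
print(json_get_value(spec, 'data["key1"][0]["key2"]'))  # 42
```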
class langchain.tools.JsonListKeysTool(*, name='json_spec_list_keys', description='\n    Can be used to list all keys at a given path.\n    Before calling this you should be SURE that the path to this exists.\n    The input is a text representation of the path to the dict in Python syntax (e.g. data["key1"][0]["key2"]).\n    ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]
Bases: langchain.tools.base.BaseTool
Tool for listing keys in a JSON spec.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
spec (langchain.tools.json.tool.JsonSpec) –
Return type
None
attribute spec: JsonSpec [Required]
class langchain.tools.ListDirectoryTool(*, name='list_directory', description='List files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
root_dir (Optional[str]) –
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'List files and directories in a specified folder'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'list_directory'
The unique name of the tool that clearly communicates its purpose.
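When root_dir is set, the file-management tools resolve relative paths against it and refuse to escape it. A minimal sketch of that sandboxing idea; list_directory below is illustrative, not the tool's actual implementation:

```python
import os
import tempfile
from pathlib import Path
from typing import Optional

def list_directory(dir_path: str, root_dir: Optional[str] = None) -> str:
    """List entries under dir_path, resolved against an optional root_dir sandbox."""
    base = Path(root_dir or ".").resolve()
    target = (base / dir_path).resolve()
    if root_dir is not None and not target.is_relative_to(base):
        # Refuse paths like "../../etc" that escape the sandbox.
        return f"Error: access outside {root_dir} is not allowed"
    return "\n".join(sorted(os.listdir(target)))

# Usage with a throwaway sandbox directory.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "notes.txt").touch()
    (Path(root) / "data").mkdir()
    print(list_directory(".", root_dir=root))  # data, then notes.txt
```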
class langchain.tools.ListPowerBITool(*, name='list_tables_powerbi', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]
Bases: langchain.tools.base.BaseTool
Tool for getting tables names.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
powerbi (langchain.utilities.powerbi.PowerBIDataset) –
Return type
None
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]
class langchain.tools.ListSQLDatabaseTool(*, name='sql_db_list_tables', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for getting tables names.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
db (langchain.sql_database.SQLDatabase) –
Return type
None
class langchain.tools.ListSparkSQLTool(*, name='list_tables_sql_db', description='Input is an empty string, output is a comma separated list of tables in the Spark SQL.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for getting tables names.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
db (langchain.utilities.spark_sql.SparkSQL) –
Return type
None
class langchain.tools.MetaphorSearchResults(*, name='metaphor_search_results_json', description='A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]
Bases: langchain.tools.base.BaseTool
Tool that has the capability to query the Metaphor Search API and get back JSON.
Parameters
name (str) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
api_wrapper (langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper) –
Return type
None
attribute api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]
class langchain.tools.MoveFileTool(*, name='move_file', description='Move or rename a file from one location to another', args_schema=<class 'langchain.tools.file_management.move.FileMoveInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
root_dir (Optional[str]) –
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Move or rename a file from one location to another'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'move_file'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.NavigateBackTool(*, name='previous_webpage', description='Navigate back to the previous page in the browser history', args_schema=<class 'pydantic.main.BaseModel'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]
Bases: langchain.tools.playwright.base.BaseBrowserTool
Navigate back to the previous page in the browser history.
Parameters
name (str) –
description (str) –
args_schema (Type[pydantic.main.BaseModel]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
sync_browser (Optional['SyncBrowser']) –
async_browser (Optional['AsyncBrowser']) –
Return type
None
attribute args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>
Pydantic model class to validate and parse the tool's input arguments.
attribute description: str = 'Navigate back to the previous page in the browser history'
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'previous_webpage'
The unique name of the tool that clearly communicates its purpose.
class langchain.tools.NavigateTool(*, name='navigate_browser', description='Navigate a browser to the specified URL', args_schema=<class 'langchain.tools.playwright.navigate.NavigateToolInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.tools.playwright.base.BaseBrowserTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Navigate a browser to the specified URL'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'navigate_browser'ο
The unique name of the tool that clearly communicates its purpose.
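The browser-tool entries above all follow the same BaseTool contract: a unique `name`, a model-facing `description`, an optional `args_schema`, and a `return_direct` flag. A stdlib-only stand-in sketches how those pieces fit together; `MiniTool` and its fields are hypothetical illustrations, not the langchain API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for the tool pattern documented above: every tool
# pairs a unique machine-readable name with a description the model reads,
# plus a run function. `return_direct` marks whether an agent loop should
# stop and return the tool output verbatim.
@dataclass
class MiniTool:
    name: str
    description: str
    func: Callable[[str], str]
    return_direct: bool = False

    def run(self, tool_input: str) -> str:
        return self.func(tool_input)

navigate = MiniTool(
    name="navigate_browser",
    description="Navigate a browser to the specified URL",
    func=lambda url: f"Navigating to {url} returned status code 200",
)

print(navigate.run("https://example.com"))
```

The split between `name` (routing key) and `description` (few-shot-capable prompt text) is what lets an agent pick the right tool without seeing its implementation.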
class langchain.tools.OpenAPISpec(*, openapi='3.1.0', info, jsonSchemaDialect=None, servers=[Server(url='/', description=None, variables=None)], paths=None, webhooks=None, components=None, security=None, tags=None, externalDocs=None)[source]ο
Bases: openapi_schema_pydantic.v3.v3_1_0.open_api.OpenAPI
OpenAPI Model that removes misformatted parts of the spec.
Parameters
openapi (str) β
info (openapi_schema_pydantic.v3.v3_1_0.info.Info) β
jsonSchemaDialect (Optional[str]) β
servers (List[openapi_schema_pydantic.v3.v3_1_0.server.Server]) β
paths (Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]]) β
webhooks (Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]]) β
components (Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components]) β
security (Optional[List[Dict[str, List[str]]]]) β
tags (Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]]) β
externalDocs (Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation]) β
Return type
None
classmethod from_file(path)[source]ο
Get an OpenAPI spec from a file path.
Parameters
path (Union[str, pathlib.Path]) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_spec_dict(spec_dict)[source]ο
Get an OpenAPI spec from a dict.
Parameters
spec_dict (dict) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_text(text)[source]ο
Get an OpenAPI spec from a text.
Parameters
text (str) β
Return type
langchain.utilities.openapi.OpenAPISpec
classmethod from_url(url)[source]ο
Get an OpenAPI spec from a URL.
Parameters
url (str) β
Return type
langchain.utilities.openapi.OpenAPISpec
static get_cleaned_operation_id(operation, path, method)[source]ο
Get a cleaned operation id from an operation id.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
path (str) β
method (str) β
Return type
str
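OpenAPI `operationId` values (or the path/method pair they fall back to) often contain characters that are illegal in an identifier, so they are cleaned before use. A plausible sketch of such a cleaning rule; the fallback-id format and the underscore substitution here are assumptions, not the library's exact normalization.

```python
import re

def clean_operation_id(operation_id: str, path: str, method: str) -> str:
    """Illustrative cleaning rule: if the spec omits operationId, build one
    from the HTTP method and path, then replace every non-alphanumeric
    character with an underscore so the result is a safe identifier."""
    if not operation_id:
        operation_id = f"{method}_{path}"
    return re.sub(r"[^a-zA-Z0-9]", "_", operation_id)

print(clean_operation_id("get-user/by.id", "/users/{id}", "get"))
```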
get_methods_for_path(path)[source]ο
Return a list of valid methods for the specified path.
Parameters
path (str) β
Return type
List[str]
get_operation(path, method)[source]ο
Get the operation object for a given path and HTTP method.
Parameters
path (str) β
method (str) β
Return type
openapi_schema_pydantic.v3.v3_1_0.operation.Operation
get_parameters_for_operation(operation)[source]ο
Get the parameters for a given operation.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
Return type
List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]
get_parameters_for_path(path)[source]ο
Parameters
path (str) β
Return type
List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]
get_referenced_schema(ref)[source]ο
Get a schema (or nested reference) or err.
Parameters
ref (openapi_schema_pydantic.v3.v3_1_0.reference.Reference) β
Return type
openapi_schema_pydantic.v3.v3_1_0.schema.Schema
get_request_body_for_operation(operation)[source]ο
Get the request body for a given operation.
Parameters
operation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) β
Return type
Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody]
get_schema(schema)[source]ο
Parameters
schema (Union[openapi_schema_pydantic.v3.v3_1_0.reference.Reference, openapi_schema_pydantic.v3.v3_1_0.schema.Schema]) β
Return type
openapi_schema_pydantic.v3.v3_1_0.schema.Schema
classmethod parse_obj(obj)[source]ο
Parameters
obj (dict) β
Return type
langchain.utilities.openapi.OpenAPISpec
property base_url: strο
Get the base url.
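A stdlib-only sketch of what `from_spec_dict`, `get_methods_for_path`, and `base_url` amount to conceptually, working on a plain dict shaped like an OpenAPI 3.x document. The helper names and the tiny spec below are illustrative, not the class's API.

```python
# Valid HTTP method keys a Path Item object may carry in OpenAPI 3.x.
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

spec = {
    "openapi": "3.1.0",
    "info": {"title": "Pet API", "version": "1.0"},
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/pets": {
            "get": {"operationId": "listPets"},
            "post": {"operationId": "createPet"},
            "description": "Pet collection",  # Path Item field, not a method
        }
    },
}

def methods_for_path(spec: dict, path: str) -> list:
    """Keep only keys of the path item that are real HTTP methods."""
    item = spec.get("paths", {}).get(path, {})
    return sorted(k for k in item if k in HTTP_METHODS)

def base_url(spec: dict) -> str:
    """First server URL, defaulting to '/' as the OpenAPI spec does."""
    servers = spec.get("servers") or [{"url": "/"}]
    return servers[0]["url"]

print(methods_for_path(spec, "/pets"))
print(base_url(spec))
```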
class langchain.tools.OpenWeatherMapQueryRun(*, name='OpenWeatherMap', description='A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query using the OpenWeatherMap API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]ο
class langchain.tools.PubmedQueryRun(*, name='PubMed', description='A wrapper around PubMed.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the PubMed API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.pupmed.PubMedAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]ο
class langchain.tools.PythonAstREPLTool(*, name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, globals=None, locals=None, sanitize_input=True)[source]ο
Bases: langchain.tools.base.BaseTool
A tool for running python code in a REPL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
globals (Optional[Dict]) β
locals (Optional[Dict]) β
sanitize_input (bool) β
Return type
None
attribute globals: Optional[Dict] [Optional]ο
attribute locals: Optional[Dict] [Optional]ο
attribute sanitize_input: bool = Trueο
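The "Ast" in PythonAstREPLTool refers to parsing the input with Python's `ast` module, so the value of a trailing expression can be returned without the model needing `print`. A simplified sketch of that exec-then-eval-last-expression pattern; the real tool adds input sanitization and error handling on top.

```python
import ast

def run_python_ast(code: str, _globals=None, _locals=None):
    """Run every statement; if the final node is a bare expression,
    evaluate it separately and return its value (like a REPL would)."""
    _globals = {} if _globals is None else _globals
    _locals = _globals if _locals is None else _locals
    tree = ast.parse(code)
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last = tree.body.pop()  # detach the trailing expression
        exec(compile(tree, "<repl>", "exec"), _globals, _locals)
        return eval(compile(ast.Expression(last.value), "<repl>", "eval"),
                    _globals, _locals)
    exec(compile(tree, "<repl>", "exec"), _globals, _locals)
    return None

print(run_python_ast("x = 2\nx * 21"))
```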
class langchain.tools.PythonREPLTool(*, name='Python_REPL', description='A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, python_repl=None, sanitize_input=True)[source]ο
Bases: langchain.tools.base.BaseTool
A tool for running python code in a REPL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
python_repl (langchain.utilities.python.PythonREPL) β
sanitize_input (bool) β
Return type
None
attribute python_repl: langchain.utilities.python.PythonREPL [Optional]ο
attribute sanitize_input: bool = Trueο
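`sanitize_input=True` exists because models tend to wrap generated code in Markdown fences or prefix it with the word `python`, which would be a SyntaxError if executed verbatim. A plausible stdlib sketch of that stripping; the library's exact rules may differ.

```python
import re

def sanitize_input(query: str) -> str:
    """Strip leading whitespace/backticks and an optional 'python'
    language tag, then strip trailing whitespace/backticks."""
    query = re.sub(r"^(\s|`)*(?i:python)?\s*", "", query)
    query = re.sub(r"(\s|`)*$", "", query)
    return query

print(sanitize_input("```python\nprint('hi')\n```"))
```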
class langchain.tools.QueryCheckerTool(*, name='query_checker_sql_db', description='\nΒ Β Β Use this tool to double check if your query is correct before executing it.\nΒ Β Β Always use this tool before executing a query with query_sql_db!\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Use an LLM to check if a query is correct.
Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
template (str) β
llm (langchain.base_language.BaseLanguageModel) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute template: str = '\n{query}\nDouble check the Spark SQL query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'ο
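The `template` attribute above is an ordinary prompt string with a single `{query}` placeholder that the tool's LLMChain fills in before asking the model to audit the SQL. A plain `.format` call on a trimmed copy shows the substitution; the query string below is made up.

```python
# Trimmed copy of the checker template (the real one lists eight checks).
CHECKER_TEMPLATE = """
{query}
Double check the Spark SQL query above for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
"""

prompt = CHECKER_TEMPLATE.format(
    query="SELECT id FROM users WHERE region NOT IN (NULL)"
)
print(prompt)
```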
class langchain.tools.QueryPowerBITool(*, name='query_powerbi', description='\nΒ Β Β Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n\nΒ Β Β Example Input: "How many rows are in table1?"\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, llm_chain, powerbi, template='\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n', examples='\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n', session_cache=None, max_iterations=5)[source]ο
Bases: langchain.tools.base.BaseTool
Tool for querying a Power BI Dataset.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
llm_chain (langchain.chains.llm.LLMChain) β
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
template (Optional[str]) β
examples (Optional[str]) β
session_cache (Dict[str, Any]) β
max_iterations (int) β
Return type
None
attribute examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n'ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute max_iterations: int = 5ο
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]ο
attribute session_cache: Dict[str, Any] [Optional]ο
attribute template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'ο
class langchain.tools.QuerySQLCheckerTool(*, name='sql_db_query_checker', description='\nΒ Β Β Use this tool to double check if your query is correct before executing it.\nΒ Β Β Always use this tool before executing a query with query_sql_db!\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Use an LLM to check if a query is correct.
Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
template (str) β
llm (langchain.base_language.BaseLanguageModel) β
llm_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute template: str = '\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'ο
class langchain.tools.QuerySQLDataBaseTool(*, name='sql_db_query', description='\nΒ Β Β Input to this tool is a detailed and correct SQL query, output is a result from the database.\nΒ Β Β If the query is not correct, an error message will be returned.\nΒ Β Β If an error is returned, rewrite the query, check the query, and try again.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool
Tool for querying a SQL database.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.sql_database.SQLDatabase) β
Return type
None
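The description above documents the tool's contract: a correct query returns rows, while an incorrect query returns an error message (rather than raising), so the agent can rewrite and retry. An illustrative stdlib-only sketch of that behavior, not the langchain implementation:

```python
import sqlite3

# Sketch of the documented contract: run a SQL query and return either
# the result or the error message as text, so the caller can retry.
def run_query(conn: sqlite3.Connection, query: str) -> str:
    try:
        return str(conn.execute(query).fetchall())
    except sqlite3.Error as e:
        return f"Error: {e}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
print(run_query(conn, "SELECT x FROM t"))  # [(1,)]
print(run_query(conn, "SELECT y FROM t"))  # error message, not an exception
```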
class langchain.tools.QuerySparkSQLTool(*, name='query_sql_db', description='\nΒ Β Β Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.\nΒ Β Β If the query is not correct, an error message will be returned.\nΒ Β Β If an error is returned, rewrite the query, check the query, and try again.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]ο
Bases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool
Tool for querying a Spark SQL.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
db (langchain.utilities.spark_sql.SparkSQL) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.ReadFileTool(*, name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Read file from disk'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'read_file'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.RequestsDeleteTool(*, name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a DELETE request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsGetTool(*, name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be aΒ url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a GET request to an API endpoint.
Parameters | https://api.python.langchain.com/en/latest/modules/tools.html |
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsPatchTool(*, name='requests_patch', description='Use this when you want to PATCH to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to PATCH to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string\nΒ Β Β The output will be the text response of the PATCH request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a PATCH request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β | https://api.python.langchain.com/en/latest/modules/tools.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsPostTool(*, name='requests_post', description='Use this when you want to POST to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to POST to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string\nΒ Β Β The output will be the text response of the POST request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a POST request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
class langchain.tools.RequestsPutTool(*, name='requests_put', description='Use this when you want to PUT to a website.\nΒ Β Β Input should be a json string with two keys: "url" and "data".\nΒ Β Β The value of "url" should be a string, and the value of "data" should be a dictionary of \nΒ Β Β key-value pairs you want to PUT to the url.\nΒ Β Β Be careful to always use double quotes for strings in the json string.\nΒ Β Β The output will be the text response of the PUT request.\nΒ Β Β ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]ο
Bases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool
Tool for making a PUT request to an API endpoint.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
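The POST, PATCH, and PUT tools above all document the same input contract: a JSON string with two keys, "url" and "data". A minimal sketch of parsing that format with the standard library (illustrative, not langchain code):

```python
import json

# The write-style request tools document their input as a JSON string
# with two keys, "url" and "data". Parsing that contract:
tool_input = '{"url": "https://example.com/api", "data": {"name": "test"}}'

parsed = json.loads(tool_input)
url, data = parsed["url"], parsed["data"]
print(url)   # https://example.com/api
print(data)  # {'name': 'test'}
```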
class langchain.tools.SceneXplainTool(*, name='image_explainer', description='An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to explain images.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.scenexplain.SceneXplainAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]ο
class langchain.tools.SearxSearchResults(*, name='Searx Search Results', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, num_results=4, kwargs=None, **extra_data)[source]ο | https://api.python.langchain.com/en/latest/modules/tools.html |
Bases: langchain.tools.base.BaseTool
Tool that has the capability to query a Searx instance and get back json.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
wrapper (langchain.utilities.searx_search.SearxSearchWrapper) β
num_results (int) β
kwargs (dict) β
extra_data (Any) β
Return type
None
attribute kwargs: dict [Optional]ο
attribute num_results: int = 4ο
attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]ο
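Per the description above, SearxSearchResults returns its output as a JSON array of query results, capped at num_results (default 4). A stdlib sketch of that serialization contract, with made-up result objects:

```python
import json

# Illustrative sketch of the documented output: a JSON array of results,
# truncated to num_results (default 4). The result objects are made up.
num_results = 4
results = [
    {"title": f"result {i}", "link": f"https://example.com/{i}"}
    for i in range(10)
]

output = json.dumps(results[:num_results])
print(len(json.loads(output)))  # 4
```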
class langchain.tools.SearxSearchRun(*, name='searx_search', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, kwargs=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query a Searx instance.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β | https://api.python.langchain.com/en/latest/modules/tools.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
wrapper (langchain.utilities.searx_search.SearxSearchWrapper) β
kwargs (dict) β
Return type
None
attribute kwargs: dict [Optional]ο
attribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]ο
class langchain.tools.ShellTool(*, name='terminal', description='Run shell commands on this Linux machine.', args_schema=<class 'langchain.tools.shell.tool.ShellInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, process=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool to run shell commands.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
process (langchain.utilities.bash.BashProcess) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'>ο
Schema for input arguments. | https://api.python.langchain.com/en/latest/modules/tools.html |
attribute description: str = 'Run shell commands on this Linux machine.'ο
Description of tool.
attribute name: str = 'terminal'ο
Name of tool.
attribute process: langchain.utilities.bash.BashProcess [Optional]ο
Bash process to run commands.
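ShellTool runs commands through langchain's BashProcess; a stdlib-only sketch of the underlying idea (run a command, capture its text output) for illustration:

```python
import subprocess

# Illustrative sketch of what a shell tool does under the hood.
# ShellTool itself delegates to langchain's BashProcess; this uses
# only the standard library.
def run_shell(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

print(run_shell("echo hello"))
```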
class langchain.tools.SleepTool(*, name='sleep', description='Make agent sleep for a specified number of seconds.', args_schema=<class 'langchain.tools.sleep.tool.SleepInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to sleep.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.sleep.tool.SleepInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
langchain.tools.StdInInquireTool(*args, **kwargs)[source]ο
Tool for asking the user for input.
Parameters
args (Any) β
kwargs (Any) β
Return type
langchain.tools.human.tool.HumanInputRun | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.SteamshipImageGenerationTool(*, name='GenerateImage', description='Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, model_name, size='512x512', steamship, return_urls=False)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
model_name (langchain.tools.steamship_image_generation.tool.ModelName) β
size (Optional[str]) β
steamship (Steamship) β
return_urls (Optional[bool]) β
Return type
None
attribute model_name: ModelName [Required]ο
attribute return_urls: Optional[bool] = Falseο
attribute size: Optional[str] = '512x512'ο
attribute steamship: Steamship [Required]ο
class langchain.tools.StructuredTool(*, name, description='', args_schema, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, func, coroutine=None)[source]ο
Bases: langchain.tools.base.BaseTool | https://api.python.langchain.com/en/latest/modules/tools.html |
Tool that can operate on any number of inputs.
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
func (Callable[[...], Any]) β
coroutine (Optional[Callable[[...], Awaitable[Any]]]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] [Required]ο
The input argumentsβ schema.
The tool schema.
attribute coroutine: Optional[Callable[[...], Awaitable[Any]]] = Noneο
The asynchronous version of the function.
attribute description: str = ''ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], Any] [Required]ο
The function to run when the tool is called.
classmethod from_function(func, name=None, description=None, return_direct=False, args_schema=None, infer_schema=True, **kwargs)[source]ο
Create tool from a given function.
A classmethod that helps to create a tool from a function.
Parameters
func (Callable) β The function from which to create a tool
name (Optional[str]) β The name of the tool. Defaults to the function name
description (Optional[str]) β The description of the tool. Defaults to the function docstring | https://api.python.langchain.com/en/latest/modules/tools.html |
return_direct (bool) β Whether to return the result directly or as a callback
args_schema (Optional[Type[pydantic.main.BaseModel]]) β The schema of the toolβs input arguments
infer_schema (bool) β Whether to infer the schema from the functionβs signature
**kwargs β Additional arguments to pass to the tool
kwargs (Any) β
Returns
The tool
Return type
langchain.tools.base.StructuredTool
Examples
β¦ code-block:: python
def add(a: int, b: int) -> int:βββAdd two numbersβββ
return a + b
tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
property args: dictο
The toolβs input arguments.
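With infer_schema=True, from_function derives the argument schema from the function's signature. A stdlib-only sketch of that idea (the real implementation builds a pydantic model rather than a plain dict):

```python
import inspect

def infer_args(func):
    """Sketch of schema inference from a signature, as from_function's
    infer_schema=True option does. The real code builds a pydantic model;
    this returns a plain name-to-type-name mapping for illustration."""
    sig = inspect.signature(func)
    return {
        name: param.annotation.__name__
        for name, param in sig.parameters.items()
    }

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

print(infer_args(add))  # {'a': 'int', 'b': 'int'}
```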
class langchain.tools.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that takes in function or coroutine directly.
Parameters
name (str) β
func (Callable[[...], str]) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
coroutine (Optional[Callable[[...], Awaitable[str]]]) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = Noneο
Pydantic model class to validate and parse the toolβs input arguments.
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = Noneο
Deprecated. Please use callbacks instead.
attribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = Noneο
Callbacks to be called during tool execution.
attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = Noneο
The asynchronous version of the function.
attribute description: str = ''ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], str] [Required]ο
The function to run when the tool is called.
attribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = Falseο
Handle the content of the ToolException thrown.
attribute name: str [Required]ο
The unique name of the tool that clearly communicates its purpose.
attribute return_direct: bool = Falseο
Whether to return the toolβs output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
attribute verbose: bool = Falseο
Whether to log the toolβs progress.
classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]ο
Initialize tool from a function.
Parameters
func (Callable) β
name (str) β
description (str) β
return_direct (bool) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
kwargs (Any) β
Return type
langchain.tools.base.Tool
property args: dictο
The toolβs input arguments.
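The handle_tool_error field above accepts a bool, a fixed string, or a callable taking the ToolException. A sketch of the dispatch that union type implies (illustrative only; the actual logic lives inside BaseTool's run methods):

```python
class ToolException(Exception):
    """Stand-in for langchain.tools.base.ToolException (illustrative)."""

def handle(error: ToolException, handler) -> str:
    # True      -> return the exception's own message
    # str       -> return that fixed string
    # callable  -> let the callable render the message
    # False/None-> re-raise, matching the default of not handling
    if handler is True:
        return str(error)
    if isinstance(handler, str):
        return handler
    if callable(handler):
        return handler(error)
    raise error

err = ToolException("bad input")
print(handle(err, True))                      # bad input
print(handle(err, "tool failed, try again"))  # tool failed, try again
print(handle(err, lambda e: f"[{e}]"))        # [bad input]
```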
class langchain.tools.VectorStoreQATool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]ο
Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool
Tool for the VectorDBQA chain. To be initialized with name and chain.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
static get_description(name, description)[source]ο
Parameters
name (str) β
description (str) β
Return type
str
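get_description builds the tool's description out of the vector store's name and description. A hypothetical sketch of such a helper; the exact wording langchain uses may differ:

```python
# Hypothetical sketch of a get_description-style helper. The phrasing
# here is invented for illustration and may not match langchain's.
def get_description(name: str, description: str) -> str:
    return (
        f"Useful for when you need to answer questions about {name}. "
        f"Input should be a fully formed question. "
        f"Source: {description}"
    )

desc = get_description("company handbook", "internal policy documents")
print(desc)
```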
class langchain.tools.VectorStoreQAWithSourcesTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]ο
Bases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool | https://api.python.langchain.com/en/latest/modules/tools.html |
Tool for the VectorDBQAWithSources chain.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
static get_description(name, description)[source]ο
Parameters
name (str) β
description (str) β
Return type
str
class langchain.tools.WikipediaQueryRun(*, name='Wikipedia', description='A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to search using the Wikipedia API.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β | https://api.python.langchain.com/en/latest/modules/tools.html |
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.wikipedia.WikipediaAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]ο
class langchain.tools.WolframAlphaQueryRun(*, name='wolfram_alpha', description='A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]ο
Bases: langchain.tools.base.BaseTool
Tool that adds the capability to query using the Wolfram Alpha SDK.
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]ο | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.WriteFileTool(*, name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]ο
Bases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Type[pydantic.main.BaseModel]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
root_dir (Optional[str]) β
Return type
None
attribute args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'>ο
Pydantic model class to validate and parse the toolβs input arguments.
attribute description: str = 'Write file to disk'ο
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute name: str = 'write_file'ο
The unique name of the tool that clearly communicates its purpose. | https://api.python.langchain.com/en/latest/modules/tools.html |
class langchain.tools.YouTubeSearchTool(*, name='youtube_search', description='search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]ο
Bases: langchain.tools.base.BaseTool
Parameters
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
Return type
None | https://api.python.langchain.com/en/latest/modules/tools.html |
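The YouTube tool's input format above is a comma-separated string whose first part is a person's name and whose optional second part is the maximum number of results. Parsing it might look like this (illustrative, not langchain's code; the fallback default of 2 is an assumption):

```python
def parse_input(tool_input: str):
    # "person_name,num_results" with num_results optional, per the
    # description above. Illustrative parsing; the default of 2 when
    # num_results is omitted is an assumption, not documented here.
    parts = [p.strip() for p in tool_input.split(",")]
    person = parts[0]
    num_results = int(parts[1]) if len(parts) > 1 else 2
    return person, num_results

print(parse_input("lex fridman,5"))  # ('lex fridman', 5)
print(parse_input("lex fridman"))    # ('lex fridman', 2)
```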
class langchain.tools.ZapierNLAListActions(*, name='ZapierNLA_list_actions', description='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\'s exposed actions.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]ο
Bases: langchain.tools.base.BaseTool
Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The return list can be empty if no actions exposed. Else will contain
a list of action objects:
[{'id': str,
'description': str,
'params': Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see "understanding the AI guessing flow" here:
https://nla.zapier.com/docs/using-the-api#ai-guessing)
Parameters
None β
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
Return type
None
attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]ο
class langchain.tools.ZapierNLARunAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, action_id, params=None, base_prompt='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description, params_schema=None)[source]ο
Bases: langchain.tools.base.BaseTool
Executes an action that is identified by action_id and must be exposed (enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens) making it safe to inject into the prompt of another LLM
call.
Parameters
action_id (str) β a specific action ID (from list actions) of the action to execute
(the set api_key must be associated with the action owner)
instructions β a natural language instruction string for using the action
(eg. "get the latest email from Mike Knoop" for "Gmail: find email" action)
params (Optional[dict]) β a dict, optional. Any params provided will override AI guesses
from instructions (see "understanding the AI guessing flow" here:
https://nla.zapier.com/docs/using-the-api#ai-guessing)
name (str) β
description (str) β
args_schema (Optional[Type[pydantic.main.BaseModel]]) β
return_direct (bool) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) β
api_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
base_prompt (str) β
zapier_description (str) β
params_schema (Dict[str, str]) β
Return type
None
attribute action_id: str [Required]ο
attribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]ο
attribute base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'ο
attribute params: Optional[dict] = Noneο
attribute params_schema: Dict[str, str] [Optional]ο
attribute zapier_description: str [Required]ο
langchain.tools.format_tool_to_openai_function(tool)[source]ο
Format tool into the OpenAI function API.
Parameters
tool (langchain.tools.base.BaseTool) β
Return type
langchain.tools.convert_to_openai.FunctionDescription
langchain.tools.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]ο
Make tools out of functions; can be used with or without arguments.
Parameters
*args β The arguments to the tool.
return_direct (bool) β Whether to return directly from the tool rather
than continuing the agent loop.
args_schema (Optional[Type[pydantic.main.BaseModel]]) β optional argument schema for user to specify
infer_schema (bool) β Whether to infer the schema of the arguments from
the functionβs signature. This also makes the resultant tool
accept a dictionary input to its run() function.
args (Union[str, Callable]) β
Return type
Callable
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return
Experimentalο
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous agentsο
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source]ο
Bases: langchain.chains.base.Chain, pydantic.main.BaseModel
Controller model for the BabyAGI agent.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
task_list (collections.deque) β
task_creation_chain (langchain.chains.base.Chain) β
task_prioritization_chain (langchain.chains.base.Chain) β
execution_chain (langchain.chains.base.Chain) β
task_id_counter (int) β
vectorstore (langchain.vectorstores.base.VectorStore) β
max_iterations (Optional[int]) β
Return type
None
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
property input_keys: List[str]ο
Input keys this chain expects.
property output_keys: List[str]ο
Output keys this chain expects.
get_next_task(result, task_description, objective)[source]ο
Get the next task.
Parameters
result (str) β
task_description (str) β
objective (str) β
Return type
List[Dict]
prioritize_tasks(this_task_id, objective)[source]ο
Prioritize tasks.
Parameters
this_task_id (int) β
objective (str) β
Return type
List[Dict]
execute_task(objective, task, k=5)[source]ο
Execute a task.
Parameters
objective (str) β
task (str) β
k (int) β
Return type
str
classmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source]ο
Initialize the BabyAGI Controller.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
verbose (bool) β
task_execution_chain (Optional[langchain.chains.base.Chain]) β
kwargs (Dict[str, Any]) β
Return type
langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
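The control flow described above (execute the next task, create follow-up tasks from the result, then reprioritize) can be sketched without any LLM calls; the three callables below are hypothetical stand-ins for execution_chain, task_creation_chain, and task_prioritization_chain:

```python
from collections import deque

def baby_agi_loop(objective, first_task, execute, create, prioritize, max_iterations):
    # task_list and task_id_counter mirror the fields on the controller.
    task_list = deque([{"task_id": 1, "task_name": first_task}])
    task_id_counter = 1
    results = []
    for _ in range(max_iterations):
        if not task_list:
            break
        task = task_list.popleft()
        result = execute(objective, task["task_name"])
        results.append(result)
        # Turn the result into new tasks, then reprioritize the queue.
        for new_task in create(result, task["task_name"], objective):
            task_id_counter += 1
            task_list.append({"task_id": task_id_counter, "task_name": new_task})
        task_list = deque(prioritize(list(task_list), objective))
    return results

results = baby_agi_loop(
    "write a report", "draft an outline",
    execute=lambda objective, task: f"done: {task}",
    create=lambda result, task, objective: ["expand section 1"] if task == "draft an outline" else [],
    prioritize=lambda tasks, objective: tasks,
    max_iterations=5,
)
print(results)  # ['done: draft an outline', 'done: expand section 1']
```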
class langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source]ο
Bases: object
Agent class for interacting with Auto-GPT.
Parameters
ai_name (str) β
memory (VectorStoreRetriever) β
chain (LLMChain) β
output_parser (BaseAutoGPTOutputParser) β
tools (List[BaseTool]) β
feedback_tool (Optional[HumanInputRun]) β
chat_history_memory (Optional[BaseChatMessageHistory]) β
Generative agentsο
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source]ο
Bases: pydantic.main.BaseModel
A character with memory and innate characteristics.
Parameters
name (str) β
age (Optional[int]) β
traits (str) β
status (str) β
memory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) β
llm (langchain.base_language.BaseLanguageModel) β
verbose (bool) β
summary (str) β
summary_refresh_seconds (int) β
last_refreshed (datetime.datetime) β
daily_summaries (List[str]) β
Return type
None
attribute name: str [Required]ο
The characterβs name.
attribute age: Optional[int] = Noneο
The optional age of the character.
attribute traits: str = 'N/A'ο
Permanent traits to ascribe to the character.
attribute status: str [Required]ο
The traits of the character you wish not to change.
attribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]ο
The memory object that combines relevance, recency, and "importance".
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The underlying language model.
attribute summary: str = ''ο
Stateful self-summary generated via reflection on the characterβs memory.
attribute summary_refresh_seconds: int = 3600ο
How frequently to re-generate the summary.
attribute last_refreshed: datetime.datetime [Optional]ο
The last time the characterβs summary was regenerated.
attribute daily_summaries: List[str] [Optional]ο
Summary of the events in the plan that the agent took.
model Config[source]ο
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = Trueο
summarize_related_memories(observation)[source]ο
Summarize memories that are most relevant to an observation.
Parameters
observation (str) β
Return type
str
generate_reaction(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
generate_dialogue_response(observation, now=None)[source]ο
React to a given observation.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
Tuple[bool, str]
get_summary(force_refresh=False, now=None)[source]ο
Return a descriptive summary of the agent.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
get_full_header(force_refresh=False, now=None)[source]ο
Return a full header of the agentβs status, summary, and current time.
Parameters
force_refresh (bool) β
now (Optional[datetime.datetime]) β
Return type
str
class langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source]ο
Bases: langchain.schema.BaseMemory
Parameters
llm (langchain.base_language.BaseLanguageModel) β
memory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) β
verbose (bool) β
reflection_threshold (Optional[float]) β
current_plan (List[str]) β
importance_weight (float) β
aggregate_importance (float) β
max_tokens_limit (int) β
queries_key (str) β
most_recent_memories_token_key (str) β
add_memory_key (str) β
relevant_memories_key (str) β
relevant_memories_simple_key (str) β
most_recent_memories_key (str) β
now_key (str) β
reflecting (bool) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
The core language model.
attribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]ο
The retriever to fetch related memories.
attribute reflection_threshold: Optional[float] = Noneο
When aggregate_importance exceeds reflection_threshold, stop to reflect.
attribute current_plan: List[str] = []ο
The current plan of the agent.
attribute importance_weight: float = 0.15ο
How much weight to assign the memory importance.
attribute aggregate_importance: float = 0.0ο
Track the sum of the "importance" of recent memories.
Triggers reflection when it reaches reflection_threshold.
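The trigger described above can be sketched in a few lines; state mirrors the fields documented here, and reflect is a hypothetical stand-in for pause_to_reflect:

```python
def add_memory(state, importance_score, reflect):
    """Accumulate weighted importance and reflect once past the threshold."""
    state["aggregate_importance"] += state["importance_weight"] * importance_score
    if (state["reflection_threshold"] is not None
            and state["aggregate_importance"] > state["reflection_threshold"]
            and not state["reflecting"]):
        state["reflecting"] = True
        reflect()
        # After reflection, the running total is reset.
        state["aggregate_importance"] = 0.0
        state["reflecting"] = False

state = {"importance_weight": 0.15, "aggregate_importance": 0.0,
         "reflection_threshold": 0.5, "reflecting": False}
reflections = []
for score in [1.0, 1.0, 2.0]:
    add_memory(state, score, lambda: reflections.append("insight"))
print(len(reflections))  # 1
```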
pause_to_reflect(now=None)[source]ο
Reflect on recent observations and generate "insights".
Parameters
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memories(memory_content, now=None)[source]ο
Add observations or memories to the agentβs memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
add_memory(memory_content, now=None)[source]ο
Add an observation or memory to the agentβs memory.
Parameters
memory_content (str) β
now (Optional[datetime.datetime]) β
Return type
List[str]
fetch_memories(observation, now=None)[source]ο
Fetch related memories.
Parameters
observation (str) β
now (Optional[datetime.datetime]) β
Return type
List[langchain.schema.Document]
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, Any]) β
Return type
None
clear()[source]ο
Clear memory contents.
Return type
None
Chainsο
Chains are easily reusable components which can be linked together.
class langchain.chains.APIChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_answer_chain, requests_wrapper, api_docs, question_key='question', output_key='output')[source]ο
Bases: langchain.chains.base.Chain
Chain that makes API calls and summarizes the responses to answer a question.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
api_request_chain (langchain.chains.llm.LLMChain) β
api_answer_chain (langchain.chains.llm.LLMChain) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
api_docs (str) β
question_key (str) β
output_key (str) β
Return type
None
attribute api_answer_chain: LLMChain [Required]ο
attribute api_docs: str [Required]ο
attribute api_request_chain: LLMChain [Required]ο
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute requests_wrapper: TextRequestsWrapper [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_api_docs(llm, api_docs, headers=None, api_url_prompt=PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt=PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs)[source]ο
Load chain from just an LLM and the api docs.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
api_docs (str) β
headers (Optional[dict]) β
api_url_prompt (langchain.prompts.base.BasePromptTemplate) β
api_response_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.api.base.APIChain
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.AnalyzeDocumentChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_document', text_splitter=None, combine_docs_chain)[source]ο
Bases: langchain.chains.base.Chain
Chain that splits documents, then analyzes them in pieces.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
text_splitter (langchain.text_splitter.TextSplitter) β
combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
Return type
None
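The chain's control flow is simply split-then-combine. A dependency-free sketch (split_fn and combine_fn are hypothetical stand-ins for text_splitter.split_text and combine_docs_chain):

```python
def analyze_document(text, split_fn, combine_fn):
    # Mirror of the chain's logic: split the single input document into
    # pieces, then delegate analysis of the pieces to the combiner.
    docs = split_fn(text)
    return combine_fn(docs)

def split_every_20(text):
    return [text[i:i + 20] for i in range(0, len(text), 20)]

result = analyze_document("a" * 50, split_every_20,
                          lambda docs: f"{len(docs)} pieces analyzed")
print(result)  # 3 pieces analyzed
```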
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute text_splitter: langchain.text_splitter.TextSplitter [Optional]ο
attribute verbose: bool [Optional]ο
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.ChatVectorDBChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, vectorstore, top_k_docs_for_context=4, search_kwargs=None)[source]ο
Bases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with a vector database.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
combine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
question_generator (langchain.chains.llm.LLMChain) β
output_key (str) β
return_source_documents (bool) β
return_generated_question (bool) β
get_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) β
vectorstore (langchain.vectorstores.base.VectorStore) β
top_k_docs_for_context (int) β
search_kwargs (dict) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute combine_docs_chain: BaseCombineDocumentsChain [Required]ο
attribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = Noneο
An optional function to get a string of the chat history. If None is provided, will use a default.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute output_key: str = 'answer'ο
attribute question_generator: LLMChain [Required]ο
attribute return_generated_question: bool = Falseο
attribute return_source_documents: bool = Falseο
attribute search_kwargs: dict [Optional]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute top_k_docs_for_context: int = 4ο
attribute vectorstore: VectorStore [Required]ο
attribute verbose: bool [Optional]ο
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
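As a sketch of how acall is typically awaited (a constructed ChatVectorDBChain named chain, an LLM with valid credentials, and a populated vector store are all assumed; the input keys question and chat_history and the output key answer follow the defaults documented above):

```python
import asyncio

async def ask(chain):
    # Await the chain with its two expected inputs; return_only_outputs=True
    # drops the echoed inputs from the result dict.
    result = await chain.acall(
        {"question": "What does the document say about pricing?", "chat_history": []},
        return_only_outputs=True,
    )
    return result["answer"]  # output_key defaults to "answer"

# asyncio.run(ask(chain))  # not runnable without a configured chain and API key
```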
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, vectorstore, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source]ο
Load chain from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
condense_question_prompt (langchain.prompts.base.BasePromptTemplate) β
chain_type (str) β
combine_docs_chain_kwargs (Optional[Dict]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain
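A minimal construction sketch, assuming an OpenAI API key and a pre-built vector store named vectorstore (neither is provided by this reference; vectorstore can be any langchain VectorStore, e.g. FAISS or Chroma):

```python
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

# vectorstore is assumed to have been built beforehand from your documents.
chain = ChatVectorDBChain.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    return_source_documents=True,
)
result = chain({"question": "What topics are covered?", "chat_history": []})
```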
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
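A sketch of the keyword-argument form of run (chain construction and credentials assumed; run returns a single string, so it suits chains with one output key):

```python
# One keyword argument per input key of the chain.
answer = chain.run(question="Summarize the retrieved context.", chat_history=[])
```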
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Input keys.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.ConstitutionalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chain, constitutional_principles, critique_chain, revision_chain, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.base.Chain
Chain for applying constitutional principles.
Example
.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain, ConstitutionalChain
    from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

    llm = OpenAI()

    qa_prompt = PromptTemplate(
        template="Q: {question} A:",
        input_variables=["question"],
    )
    qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

    constitutional_chain = ConstitutionalChain.from_llm(
        llm=llm,
        chain=qa_chain,
        constitutional_principles=[
            ConstitutionalPrinciple(
                critique_request="Tell if this answer is good.",
                revision_request="Give a better answer.",
            )
        ],
    )

    constitutional_chain.run(question="What is the meaning of life?")
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
chain (langchain.chains.llm.LLMChain) β
constitutional_principles (List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]) β
critique_chain (langchain.chains.llm.LLMChain) β
revision_chain (langchain.chains.llm.LLMChain) β
return_intermediate_steps (bool) β
Return type
None
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute chain: langchain.chains.llm.LLMChain [Required]ο
attribute constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]ο
attribute critique_chain: langchain.chains.llm.LLMChain [Required]ο
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_intermediate_steps: bool = Falseο
attribute revision_chain: langchain.chains.llm.LLMChain [Required]ο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm(llm, chain, critique_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and youβre unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs)[source]ο
Create a chain from an LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chain (langchain.chains.llm.LLMChain) β
critique_prompt (langchain.prompts.base.BasePromptTemplate) β
revision_prompt (langchain.prompts.base.BasePromptTemplate) β
kwargs (Any) β
Return type
langchain.chains.constitutional_ai.base.ConstitutionalChain
classmethod get_principles(names=None)[source]ο
Parameters
names (Optional[List[str]]) β
Return type
List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]
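A sketch of get_principles usage (the available names come from the built-in principle registry in langchain.chains.constitutional_ai.principles; "harmful1" is shown as an assumed example key):

```python
from langchain.chains import ConstitutionalChain

# No names: every registered ConstitutionalPrinciple.
all_principles = ConstitutionalChain.get_principles()

# With names: only the principles whose registry keys match.
subset = ConstitutionalChain.get_principles(names=["harmful1"])
```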
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property input_keys: List[str]ο
Defines the input keys.
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property output_keys: List[str]ο
Defines the output keys.
class langchain.chains.ConversationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True), llm, output_key='response', output_parser=None, return_final_only=True, llm_kwargs=None, input_key='input')[source]ο
Bases: langchain.chains.llm.LLMChain
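A usage sketch, assuming an OpenAI API key: the default prompt in the signature above expects history (supplied by a memory object) and input.

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # fills the {history} variable each turn
)
reply = conversation.run(input="Hello, what can you do?")
```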