1f9591eab472-0
langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun¶ class langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶ Async callback manager for retriever run. Initialize the run manager. Parameters run_id (UUID) – The ID of the run. handlers (List[BaseCallbackHandler]) – The list of handlers. inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. tags (Optional[List[str]]) – The list of tags. inheritable_tags (Optional[List[str]]) – The list of inheritable tags. metadata (Optional[Dict[str, Any]]) – The metadata. inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata. Methods __init__(*, run_id, handlers, ...[, ...]) Initialize the run manager. get_child([tag]) Get a child callback manager. get_noop_manager() Return a manager that doesn't perform any operations. on_retriever_end(documents, **kwargs) Run when retriever ends running. on_retriever_error(error, **kwargs) Run when retriever errors. on_retry(retry_state, **kwargs) Run on a retry event. on_text(text, **kwargs) Run when text is received.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html
1f9591eab472-1
on_text(text, **kwargs) Run when text is received. __init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶ Initialize the run manager. Parameters run_id (UUID) – The ID of the run. handlers (List[BaseCallbackHandler]) – The list of handlers. inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. tags (Optional[List[str]]) – The list of tags. inheritable_tags (Optional[List[str]]) – The list of inheritable tags. metadata (Optional[Dict[str, Any]]) – The metadata. inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata. get_child(tag: Optional[str] = None) → AsyncCallbackManager¶ Get a child callback manager. Parameters tag (str, optional) – The tag for the child callback manager. Defaults to None. Returns The child callback manager. Return type AsyncCallbackManager classmethod get_noop_manager() → BRM¶ Return a manager that doesn’t perform any operations. Returns The noop manager. Return type BaseRunManager async on_retriever_end(documents: Sequence[Document], **kwargs: Any) → None[source]¶ Run when retriever ends running. async on_retriever_error(error: BaseException, **kwargs: Any) → None[source]¶ Run when retriever errors.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html
1f9591eab472-2
Run when retriever errors. async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶ Run on a retry event. async on_text(text: str, **kwargs: Any) → Any¶ Run when text is received. Parameters text (str) – The received text. Returns The result of the callback. Return type Any Examples using AsyncCallbackManagerForRetrieverRun¶ Retrieve as you generate with FLARE
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html
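For orientation, here is a minimal sketch of where this manager shows up in practice: a custom async retriever receives it as the run_manager argument of _aget_relevant_documents. The retriever class and document text below are hypothetical; only the run-manager wiring follows the API documented above.

from typing import List

from langchain.callbacks.manager import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
from langchain.schema import BaseRetriever, Document


class StaticRetriever(BaseRetriever):
    """Toy retriever returning a fixed document (illustrative only)."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"sync result for: {query}")]

    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        # The run manager relays intermediate events to attached handlers.
        await run_manager.on_text(f"looking up: {query}")
        return [Document(page_content=f"async result for: {query}")]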
e350f288fc08-0
langchain.callbacks.file.FileCallbackHandler¶ class langchain.callbacks.file.FileCallbackHandler(filename: str, mode: str = 'a', color: Optional[str] = None)[source]¶ Callback Handler that writes to a file. Initialize callback handler. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. ignore_retry Whether to ignore retry callbacks. raise_error run_inline Methods __init__(filename[, mode, color]) Initialize callback handler. on_agent_action(action[, color]) Run on agent action. on_agent_finish(finish[, color]) Run on agent end. on_chain_end(outputs, **kwargs) Print out that we finished a chain. on_chain_error(error, *, run_id[, parent_run_id]) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Print out that we are entering a chain. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id[, parent_run_id]) Run when LLM ends running. on_llm_error(error, *, run_id[, parent_run_id]) Run when LLM errors. on_llm_new_token(token, *[, chunk, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html
e350f288fc08-1
Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_retry(retry_state, *, run_id[, parent_run_id]) Run on a retry event. on_text(text[, color, end]) Run when agent ends. on_tool_end(output[, color, ...]) If not the final action, print out observation. on_tool_error(error, *, run_id[, parent_run_id]) Run when tool errors. on_tool_start(serialized, input_str, *, run_id) Run when tool starts running. __init__(filename: str, mode: str = 'a', color: Optional[str] = None) → None[source]¶ Initialize callback handler. on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Print out that we finished a chain. on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Print out that we are entering a chain.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html
e350f288fc08-2
Print out that we are entering a chain. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when LLM ends running. on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when LLM errors. on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on new LLM token. Only available when streaming is enabled. Parameters token (str) – The new token. chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when LLM starts running. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html
e350f288fc08-3
Run when Retriever ends running. on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on a retry event. on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶ Run when agent ends. on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶ If not the final action, print out observation. on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when tool starts running. Examples using FileCallbackHandler¶ Logging to file
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.file.FileCallbackHandler.html
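A short usage sketch matching the "Logging to file" example referenced above; the filename and chain are placeholders, and an OpenAI API key is assumed to be configured.

from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = FileCallbackHandler("output.log")  # mode='a' appends by default

llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)  # chain start/end output is written to output.log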
c73e90b53d06-0
langchain.callbacks.clearml_callback.ClearMLCallbackHandler¶ class langchain.callbacks.clearml_callback.ClearMLCallbackHandler(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]¶ Callback Handler that logs to ClearML. Parameters task_type (str) – The type of clearml task such as “inference”, “testing” or “qc” project_name (str) – The clearml project name tags (list) – Tags to add to the task task_name (str) – Name of the clearml task visualize (bool) – Whether to visualize the run. complexity_metrics (bool) – Whether to log complexity metrics stream_logs (bool) – Whether to stream callback actions to ClearML This handler uses the associated callback method, formats the input of each callback function with metadata about the state of the LLM run, adds the response to the list of records for both {method}_records and the action, and then logs the response to the ClearML console. Initialize callback handler. Attributes always_verbose Whether to call verbose callbacks even if verbose is False. ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. ignore_retry Whether to ignore retry callbacks. raise_error run_inline Methods __init__([task_type, project_name, tags, ...]) Initialize callback handler. analyze_text(text) Analyze text using textstat and spacy.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html
c73e90b53d06-1
analyze_text(text) Analyze text using textstat and spacy. flush_tracker([name, langchain_asset, finish]) Flush the tracker and setup the session. get_custom_callback_meta() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run when agent ends running. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run when LLM generates a new token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_retry(retry_state, *, run_id[, parent_run_id]) Run on a retry event. on_text(text, **kwargs) Run when agent is ending. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html
c73e90b53d06-2
on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. reset_callback_meta() Reset the callback metadata. __init__(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False) → None[source]¶ Initialize callback handler. analyze_text(text: str) → dict[source]¶ Analyze text using textstat and spacy. Parameters text (str) – The text to analyze. Returns A dictionary containing the complexity metrics. Return type (dict) flush_tracker(name: Optional[str] = None, langchain_asset: Any = None, finish: bool = False) → None[source]¶ Flush the tracker and set up the session. Everything after this will be a new table. Parameters name – A name identifying the session performed so far. langchain_asset – The langchain asset to save. finish – Whether to finish the run. Returns None get_custom_callback_meta() → Dict[str, Any]¶ on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶ Run when agent ends running. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶ Run when chain errors.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html
c73e90b53d06-3
Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run when LLM generates a new token. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html
c73e90b53d06-4
Run when Retriever starts running. on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on a retry event. on_text(text: str, **kwargs: Any) → None[source]¶ Run when agent is ending. on_tool_end(output: str, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. reset_callback_meta() → None¶ Reset the callback metadata. Examples using ClearMLCallbackHandler¶ ClearML
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.clearml_callback.ClearMLCallbackHandler.html
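A hedged usage sketch assembled from the parameters documented above; it assumes the clearml package (plus the textstat/spacy extras for complexity metrics) is installed and ClearML credentials are configured.

from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

clearml_cb = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    visualize=False,
    complexity_metrics=True,   # log textstat/spacy metrics per generation
    stream_logs=True,
)
llm = OpenAI(temperature=0, callbacks=[clearml_cb, StdOutCallbackHandler()])
llm("Tell me a joke")
# Flush buffered records to the ClearML console; finish=True closes the task.
clearml_cb.flush_tracker(langchain_asset=llm, name="simple_run", finish=True)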
efb905858a94-0
langchain.callbacks.tracers.langchain.get_client¶ langchain.callbacks.tracers.langchain.get_client() → Client[source]¶ Get the client.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.get_client.html
9fb4fe829e6c-0
langchain.callbacks.manager.ParentRunManager¶ class langchain.callbacks.manager.ParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶ Sync Parent Run Manager. Initialize the run manager. Parameters run_id (UUID) – The ID of the run. handlers (List[BaseCallbackHandler]) – The list of handlers. inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. tags (Optional[List[str]]) – The list of tags. inheritable_tags (Optional[List[str]]) – The list of inheritable tags. metadata (Optional[Dict[str, Any]]) – The metadata. inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata. Methods __init__(*, run_id, handlers, ...[, ...]) Initialize the run manager. get_child([tag]) Get a child callback manager. get_noop_manager() Return a manager that doesn't perform any operations. on_retry(retry_state, **kwargs) Run on a retry event. on_text(text, **kwargs) Run when text is received.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html
9fb4fe829e6c-1
on_text(text, **kwargs) Run when text is received. __init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶ Initialize the run manager. Parameters run_id (UUID) – The ID of the run. handlers (List[BaseCallbackHandler]) – The list of handlers. inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. tags (Optional[List[str]]) – The list of tags. inheritable_tags (Optional[List[str]]) – The list of inheritable tags. metadata (Optional[Dict[str, Any]]) – The metadata. inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata. get_child(tag: Optional[str] = None) → CallbackManager[source]¶ Get a child callback manager. Parameters tag (str, optional) – The tag for the child callback manager. Defaults to None. Returns The child callback manager. Return type CallbackManager classmethod get_noop_manager() → BRM¶ Return a manager that doesn’t perform any operations. Returns The noop manager. Return type BaseRunManager on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶ Run on a retry event. on_text(text: str, **kwargs: Any) → Any¶ Run when text is received. Parameters text (str) – The received text.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html
9fb4fe829e6c-2
Run when text is received. Parameters text (str) – The received text. Returns The result of the callback. Return type Any
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html
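A sketch of the parent/child relationship described above: a chain-level run manager (a ParentRunManager subclass) hands a child CallbackManager to nested components so handlers and tags are inherited. The serialized dict and inputs are placeholders.

from langchain.callbacks import StdOutCallbackHandler
from langchain.callbacks.manager import CallbackManager

manager = CallbackManager(handlers=[StdOutCallbackHandler()])
# on_chain_start returns a run manager derived from ParentRunManager.
run_manager = manager.on_chain_start({"name": "outer_chain"}, {"input": "hi"})
child = run_manager.get_child(tag="inner")  # -> CallbackManager
# `child` can now be passed as the callbacks argument of a nested chain or
# LLM; it inherits the parent's handlers and carries the "inner" tag.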
2df61bc4c4ec-0
langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler¶ class langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶ Callback handler that returns an async iterator. Only the final output of the agent will be iterated. Instantiate AsyncFinalIteratorCallbackHandler. Parameters answer_prefix_tokens – Token sequence that prefixes the answer. Default is [“Final”, “Answer”, “:”] strip_tokens – Whether to ignore white space and newlines when comparing answer_prefix_tokens to the last tokens (to determine whether the answer has been reached). stream_prefix – Whether the answer prefix itself should also be streamed. Attributes always_verbose ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. ignore_retry Whether to ignore retry callbacks. raise_error run_inline Methods __init__(*[, answer_prefix_tokens, ...]) Instantiate AsyncFinalIteratorCallbackHandler. aiter() append_to_last_tokens(token) check_if_answer_reached() on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id[, ...]) Run when chain ends running. on_chain_error(error, *, run_id[, ...]) Run when chain errors. on_chain_start(serialized, inputs, *, run_id) Run when chain starts running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
2df61bc4c4ec-1
Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run on retriever end. on_retriever_error(error, *, run_id[, ...]) Run on retriever error. on_retriever_start(serialized, query, *, run_id) Run on retriever start. on_retry(retry_state, *, run_id[, parent_run_id]) Run on a retry event. on_text(text, *, run_id[, parent_run_id, tags]) Run on arbitrary text. on_tool_end(output, *, run_id[, ...]) Run when tool ends running. on_tool_error(error, *, run_id[, ...]) Run when tool errors. on_tool_start(serialized, input_str, *, run_id) Run when tool starts running. __init__(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False) → None[source]¶ Instantiate AsyncFinalIteratorCallbackHandler. Parameters answer_prefix_tokens – Token sequence that prefixes the answer. Default is [“Final”, “Answer”, “:”] strip_tokens – Ignore white spaces and new lines when comparing
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
2df61bc4c4ec-2
strip_tokens – Whether to ignore white space and newlines when comparing answer_prefix_tokens to the last tokens (to determine whether the answer has been reached). stream_prefix – Whether the answer prefix itself should also be streamed. async aiter() → AsyncIterator[str]¶ append_to_last_tokens(token: str) → None[source]¶ check_if_answer_reached() → bool[source]¶ async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent action. async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent end. async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain ends running. async on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain errors. async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when chain starts running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
2df61bc4c4ec-3
Run when chain starts running. async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. async on_llm_error(error: BaseException, **kwargs: Any) → None¶ Run when LLM errors. async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on retriever end. async on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on retriever error. async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run on retriever start.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
2df61bc4c4ec-4
Run on retriever start. async on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on a retry event. async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on arbitrary text. async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool ends running. async on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool errors. async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when tool starts running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
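A usage sketch: stream only the tokens of the agent's final answer. The agent object is assumed to exist and to wrap a streaming-enabled LLM; names are illustrative.

import asyncio

from langchain.callbacks.streaming_aiter_final_only import (
    AsyncFinalIteratorCallbackHandler,
)


async def stream_final_answer(agent, query: str) -> None:
    handler = AsyncFinalIteratorCallbackHandler()
    # Run the agent concurrently while consuming tokens from the handler.
    task = asyncio.create_task(agent.arun(query, callbacks=[handler]))
    async for token in handler.aiter():
        # Only tokens after the answer prefix ("Final", "Answer", ":")
        # are yielded here.
        print(token, end="", flush=True)
    await task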
3331a608af0f-0
langchain.callbacks.tracers.wandb.RunProcessor¶ class langchain.callbacks.tracers.wandb.RunProcessor(wandb_module: Any, trace_module: Any)[source]¶ Handles the conversion of LangChain Runs into a WBTraceTree. Methods __init__(wandb_module, trace_module) build_tree(runs) Builds a nested dictionary from a list of runs. :param runs: The list of runs to build the tree from. :return: The nested dictionary representing the langchain Run in a tree structure compatible with WBTraceTree. flatten_run(run) Utility to flatten a nested run object into a list of runs. modify_serialized_iterative(runs[, ...]) Utility to modify the serialized field of a list of runs dictionaries. process_model(run) Utility to process a run for wandb model_dict serialization. process_span(run) Converts a LangChain Run into a W&B Trace Span. truncate_run_iterative(runs[, keep_keys]) Utility to truncate a list of runs dictionaries to only keep the specified keys. __init__(wandb_module: Any, trace_module: Any)[source]¶ build_tree(runs: List[Dict[str, Any]]) → Dict[str, Any][source]¶ Builds a nested dictionary from a list of runs. :param runs: The list of runs to build the tree from. :return: The nested dictionary representing the langchain Run in a tree structure compatible with WBTraceTree. flatten_run(run: Dict[str, Any]) → List[Dict[str, Any]][source]¶ Utility to flatten a nested run object into a list of runs. :param run: The base run to flatten. :return: The flattened list of runs.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.RunProcessor.html
3331a608af0f-1
:param run: The base run to flatten. :return: The flattened list of runs. modify_serialized_iterative(runs: List[Dict[str, Any]], exact_keys: Tuple[str, ...] = (), partial_keys: Tuple[str, ...] = ()) → List[Dict[str, Any]][source]¶ Utility to modify the serialized field of a list of runs dictionaries. Removes any keys that match the exact_keys and any keys that contain any of the partial_keys. Recursively moves the dictionaries under the kwargs key to the top level. Changes the “id” field to a string “_kind” field that tells WBTraceTree how to visualize the run. Promotes the “serialized” field to the top level. Parameters runs – The list of runs to modify. exact_keys – A tuple of keys to remove from the serialized field. partial_keys – A tuple of partial keys to remove from the serialized field. Returns The modified list of runs. process_model(run: Run) → Optional[Dict[str, Any]][source]¶ Utility to process a run for wandb model_dict serialization. :param run: The run to process. :return: The converted model_dict to pass to WBTraceTree. process_span(run: Run) → Optional['Span'][source]¶ Converts a LangChain Run into a W&B Trace Span. :param run: The LangChain Run to convert. :return: The converted W&B Trace Span. truncate_run_iterative(runs: List[Dict[str, Any]], keep_keys: Tuple[str, ...] = ()) → List[Dict[str, Any]][source]¶ Utility to truncate a list of runs dictionaries to only keep the specified keys in each run. Parameters runs – The list of runs to truncate. keep_keys – The keys to keep in each run. Returns
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.RunProcessor.html
3331a608af0f-2
keep_keys – The keys to keep in each run. Returns The truncated list of runs.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.RunProcessor.html
f3f9058e6b3b-0
langchain.callbacks.utils.flatten_dict¶ langchain.callbacks.utils.flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '_') → Dict[str, Any][source]¶ Flattens a nested dictionary into a flat dictionary. Parameters nested_dict (dict) – The nested dictionary to flatten. parent_key (str) – The prefix to prepend to the keys of the flattened dict. sep (str) – The separator to use between the parent key and the key of the flattened dictionary. Returns A flat dictionary. Return type (dict)
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.flatten_dict.html
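A small usage example of flatten_dict with the default '_' separator (the dictionary contents are arbitrary):

from langchain.callbacks.utils import flatten_dict

nested = {"model": {"name": "gpt", "params": {"temperature": 0.7}}}
print(flatten_dict(nested))
# {'model_name': 'gpt', 'model_params_temperature': 0.7}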
8bb81dbb0447-0
langchain.callbacks.base.BaseCallbackManager¶ class langchain.callbacks.base.BaseCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶ Base callback manager that handles callbacks from LangChain. Initialize callback manager. Attributes is_async Whether the callback manager is async. Methods __init__(handlers[, inheritable_handlers, ...]) Initialize callback manager. add_handler(handler[, inherit]) Add a handler to the callback manager. add_metadata(metadata[, inherit]) add_tags(tags[, inherit]) copy() Copy the callback manager. on_chain_start(serialized, inputs, *, run_id) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_start(serialized, prompts, *, run_id) Run when LLM starts running. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_tool_start(serialized, input_str, *, run_id) Run when tool starts running. remove_handler(handler) Remove a handler from the callback manager. remove_metadata(keys) remove_tags(tags) set_handler(handler[, inherit]) Set handler as the only handler on the callback manager. set_handlers(handlers[, inherit]) Set handlers as the only handlers on the callback manager.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html
8bb81dbb0447-1
Set handlers as the only handlers on the callback manager. __init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None[source]¶ Initialize callback manager. add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None[source]¶ Add a handler to the callback manager. add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None[source]¶ add_tags(tags: List[str], inherit: bool = True) → None[source]¶ copy() → T[source]¶ Copy the callback manager. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html
8bb81dbb0447-2
Run when a chat model starts running. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when LLM starts running. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when tool starts running. remove_handler(handler: BaseCallbackHandler) → None[source]¶ Remove a handler from the callback manager. remove_metadata(keys: List[str]) → None[source]¶ remove_tags(tags: List[str]) → None[source]¶ set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None[source]¶ Set handler as the only handler on the callback manager. set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None[source]¶ Set handlers as the only handlers on the callback manager.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.base.BaseCallbackManager.html
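A sketch of handler management through a concrete subclass (CallbackManager); StdOutCallbackHandler stands in for any BaseCallbackHandler.

from langchain.callbacks import StdOutCallbackHandler
from langchain.callbacks.manager import CallbackManager

manager = CallbackManager(handlers=[])
handler = StdOutCallbackHandler()

manager.add_handler(handler, inherit=True)   # propagated to child managers
manager.add_tags(["demo"], inherit=True)
manager.add_metadata({"experiment": "example"})

clone = manager.copy()           # independent copy of handlers/tags/metadata
manager.remove_handler(handler)  # `clone` keeps its own handler list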
08d326b04d5a-0
langchain.callbacks.manager.tracing_enabled¶ langchain.callbacks.manager.tracing_enabled(session_name: str = 'default') → Generator[TracerSessionV1, None, None][source]¶ Get the deprecated LangChainTracer in a context manager. Parameters session_name (str, optional) – The name of the session. Defaults to “default”. Returns The LangChainTracer session. Return type TracerSessionV1 Example >>> with tracing_enabled() as session: ... # Use the LangChainTracer session Examples using tracing_enabled¶ Multiple callback handlers
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.tracing_enabled.html
65e6177b0abe-0
langchain.callbacks.whylabs_callback.import_langkit¶ langchain.callbacks.whylabs_callback.import_langkit(sentiment: bool = False, toxicity: bool = False, themes: bool = False) → Any[source]¶ Import the langkit python package and raise an error if it is not installed. Parameters sentiment – Whether to import the langkit.sentiment module. Defaults to False. toxicity – Whether to import the langkit.toxicity module. Defaults to False. themes – Whether to import the langkit.themes module. Defaults to False. Returns The imported langkit module.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.whylabs_callback.import_langkit.html
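Usage is a guarded import; for example:

from langchain.callbacks.whylabs_callback import import_langkit

# Raises an informative error if the langkit package is not installed;
# the flags pull in the optional sentiment/toxicity/themes modules.
langkit = import_langkit(sentiment=True, toxicity=True)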
71b9ac55b6f9-0
langchain.callbacks.sagemaker_callback.save_json¶ langchain.callbacks.sagemaker_callback.save_json(data: dict, file_path: str) → None[source]¶ Save dict to local file path. Parameters data (dict) – The dictionary to be saved. file_path (str) – Local file path.
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.sagemaker_callback.save_json.html
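For example (the path and payload are arbitrary):

from langchain.callbacks.sagemaker_callback import save_json

save_json({"step": 10, "loss": 0.42}, "/tmp/metrics.json")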
028fab57b375-0
langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler¶ class langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler[source]¶ Attributes intent_callback pii_callback toxicity_callback Methods __init__() on_after_intent(moderation_beacon, ...) Run after Intent validation is complete. on_after_pii(moderation_beacon, unique_id, ...) Run after PII validation is complete. on_after_toxicity(moderation_beacon, ...) Run after Toxicity validation is complete. __init__() → None[source]¶ async on_after_intent(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶ Run after Intent validation is complete. async on_after_pii(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶ Run after PII validation is complete. async on_after_toxicity(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶ Run after Toxicity validation is complete.
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler.html
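A sketch of a custom handler: subclass and override the async on_after_* hooks. The field read from moderation_beacon is an assumption about the beacon payload, not a documented contract.

from typing import Any, Dict

from langchain_experimental.comprehend_moderation.base_moderation_callbacks import (
    BaseModerationCallbackHandler,
)


class AuditingModerationHandler(BaseModerationCallbackHandler):
    async def on_after_pii(
        self, moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any
    ) -> None:
        # "moderation_status" is an assumed beacon field, for illustration.
        status = moderation_beacon.get("moderation_status")
        print(f"[{unique_id}] PII validation finished with status: {status}")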
3250f36094c8-0
langchain_experimental.comprehend_moderation.base_moderation.BaseModeration¶ class langchain_experimental.comprehend_moderation.base_moderation.BaseModeration(client: Any, config: Optional[Any] = None, moderation_callback: Optional[Any] = None, unique_id: Optional[str] = None, run_manager: Optional[CallbackManagerForChainRun] = None)[source]¶ Methods __init__(client[, config, ...]) moderate(prompt) __init__(client: Any, config: Optional[Any] = None, moderation_callback: Optional[Any] = None, unique_id: Optional[str] = None, run_manager: Optional[CallbackManagerForChainRun] = None)[source]¶ moderate(prompt: Any) → str[source]¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation.BaseModeration.html
ae7b21dd152d-0
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError¶ class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError(message: str = 'The prompt contains toxic content and cannot be processed')[source]¶ Exception raised if Toxic entities are detected. message -- explanation of the error
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError.html
99ec51721024-0
langchain_experimental.comprehend_moderation.intent.ComprehendIntent¶ class langchain_experimental.comprehend_moderation.intent.ComprehendIntent(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶ Methods __init__(client[, callback, unique_id, chain_id]) validate(prompt_value[, config]) Check and validate the intent of the given prompt text. __init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶ validate(prompt_value: str, config: Any = None) → str[source]¶ Check and validate the intent of the given prompt text. Parameters prompt_value (str) – The input text to be checked for unintended intent. config (Dict[str, Any]) – Configuration settings for intent checks. Raises ValueError – If unintended intent is found in the prompt text based on the specified threshold. – Returns The input prompt_value. Return type str Note This function checks the intent of the provided prompt text using Comprehend’s classify_document API and raises an error if unintended intent is detected with a score above the specified threshold. Example comprehend_client = boto3.client(‘comprehend’) prompt_text = “Please tell me your credit card information.” config = {“threshold”: 0.7} checked_prompt = check_intent(comprehend_client, prompt_text, config)
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.intent.ComprehendIntent.html
4d942551787c-0
langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig¶ class langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param filters: List[Union[langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig, langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig, langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig]] = [ModerationPiiConfig(threshold=0.5, labels=[], redact=False, mask_character='*'), ModerationToxicityConfig(threshold=0.5, labels=[]), ModerationIntentConfig(threshold=0.5)]¶ Filters applied to the moderation chain, defaults to [ModerationPiiConfig(), ModerationToxicityConfig(), ModerationIntentConfig()] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html
4d942551787c-1
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html
4d942551787c-2
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html
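A construction sketch using the filter classes named in the filters default above; the threshold and redaction values are arbitrary.

from langchain_experimental.comprehend_moderation.base_moderation_config import (
    BaseModerationConfig,
    ModerationIntentConfig,
    ModerationPiiConfig,
    ModerationToxicityConfig,
)

config = BaseModerationConfig(
    filters=[
        ModerationPiiConfig(threshold=0.5, redact=True, mask_character="X"),
        ModerationToxicityConfig(threshold=0.5),
        ModerationIntentConfig(threshold=0.5),
    ]
)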
cba3a52d83bf-0
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationIntentionError¶ class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationIntentionError(message: str = 'The prompt indicates an un-desired intent and cannot be processed')[source]¶ Exception raised if Intention entities are detected. message -- explanation of the error
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationIntentionError.html
22a9575fcf23-0
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError¶ class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError(message: str = 'The prompt contains PII entities and cannot be processed')[source]¶ Exception raised if PII entities are detected. message -- explanation of the error
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError.html
346061274e6c-0
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig¶ class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param threshold: float = 0.5¶ Threshold for Intent classification confidence score, defaults to 0.5 i.e. 50% classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig.html
346061274e6c-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig.html
346061274e6c-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationIntentConfig.html
eb8b0078d4e7-0
langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain¶ class langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain[source]¶ Bases: Chain A subclass of Chain, designed to apply moderation to LLMs. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param client: Optional[Any] = None¶ boto3 client object for connection to Amazon Comprehend param credentials_profile_name: Optional[str] = None¶ The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html param input_key: str = 'input'¶ Key used to fetch/store the input in data containers. Defaults to input param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-1
and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param moderation_callback: Optional[langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler] = None¶ Callback handler for moderation; this is different from regular callbacks, which can be used in addition to it. param moderation_config: langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig = BaseModerationConfig(filters=[ModerationPiiConfig(threshold=0.5, labels=[], redact=False, mask_character='*'), ModerationToxicityConfig(threshold=0.5, labels=[]), ModerationIntentConfig(threshold=0.5)])¶ Configuration settings for moderation; defaults to BaseModerationConfig with default values. param output_key: str = 'output'¶ Key used to fetch/store the output in data containers. Defaults to output param region_name: Optional[str] = None¶ The aws region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or the region specified in ~/.aws/config in case it is not provided here. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks.
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-2
and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param unique_id: Optional[str] = None¶ A unique id that can be used to identify or group a user or session param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-3
metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-4
addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-5
addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-6
jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-7
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-8
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs.
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-9
memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..."
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-10
context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable.
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-11
Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_keys: List[str]¶ Returns a list of input keys expected by the prompt. This method defines the input keys that the prompt expects in order to perform its processing. It ensures that the specified keys are available for providing input to the prompt. Returns A list of input keys. Return type List[str] Note This method is considered private and may not be intended for direct external use. property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_keys: List[str]¶ Returns a list of output keys. This method defines the output keys that will be used to access the output
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
eb8b0078d4e7-12
This method defines the output keys that will be used to access the output values produced by the chain or function. It ensures that the specified keys are available to access the outputs. Returns A list of output keys. Return type List[str] Note This method is considered private and may not be intended for direct external use. property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html
a71b147055d1-0
langchain_experimental.comprehend_moderation.pii.ComprehendPII¶ class langchain_experimental.comprehend_moderation.pii.ComprehendPII(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶ Methods __init__(client[, callback, unique_id, chain_id]) validate(prompt_value[, config]) __init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶ validate(prompt_value: str, config: Any = None) → str[source]¶
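A hedged usage sketch: the class is handed a boto3 Comprehend client by the caller, and validate() returns the (possibly redacted) text. The region and the behavior under the default config are assumptions for illustration.

import boto3

from langchain_experimental.comprehend_moderation.pii import ComprehendPII

client = boto3.client("comprehend", region_name="us-east-1")  # assumed region
pii_check = ComprehendPII(client=client)

# Returns the prompt (redacted if so configured); may raise if the
# configured action on detected PII is to stop the chain.
checked = pii_check.validate("Call me at 555-0100.")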
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.pii.ComprehendPII.html
33b5a6bf1b32-0
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig¶ class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param labels: List[str] = []¶ List of PII Universal Labels. Defaults to an empty list []. param mask_character: str = '*'¶ Redaction mask character in case redact=True; defaults to asterisk (*). param redact: bool = False¶ Whether to perform redaction of detected PII entities. param threshold: float = 0.5¶ Threshold for PII confidence score; defaults to 0.5, i.e. 50%. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html
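A minimal sketch constructing the config described above. SSN and EMAIL are Amazon Comprehend PII entity types, used here purely for illustration.

from langchain_experimental.comprehend_moderation.base_moderation_config import (
    ModerationPiiConfig,
)

# Redact SSNs and emails at 80% confidence instead of stopping the chain.
pii_config = ModerationPiiConfig(
    threshold=0.8,
    labels=["SSN", "EMAIL"],
    redact=True,
    mask_character="X",
)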
33b5a6bf1b32-1
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html
33b5a6bf1b32-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html
1829d2828ce0-0
langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity¶ class langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶ Methods __init__(client[, callback, unique_id, chain_id]) validate(prompt_value[, config]) Check the toxicity of a given text prompt using AWS Comprehend service and apply actions based on configuration. __init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶ validate(prompt_value: str, config: Any = None) → str[source]¶ Check the toxicity of a given text prompt using AWS Comprehend service and apply actions based on configuration. :param prompt_value: The text content to be checked for toxicity. :type prompt_value: str :param config: Configuration for toxicity checks and actions. :type config: Dict[str, Any] Returns The original prompt_value if allowed or no toxicity found. Return type str Raises ValueError – If the prompt contains toxic labels and cannot be processed based on the configuration. –
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity.html
28346b7e22ea-0
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig¶ class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param labels: List[str] = []¶ List of toxic labels. Defaults to an empty list []. param threshold: float = 0.5¶ Threshold for Toxic label confidence score; defaults to 0.5, i.e. 50%. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
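Constructing the config is a one-liner; for example, to flag any toxic label scored above 70% confidence:

from langchain_experimental.comprehend_moderation.base_moderation_config import (
    ModerationToxicityConfig,
)

toxicity_config = ModerationToxicityConfig(threshold=0.7)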
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html
28346b7e22ea-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html
28346b7e22ea-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html
bfab539cecac-0
langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator¶ class langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator[source]¶ Bases: BaseModel Generates synthetic data using the given LLM and few-shot template. Utilizes the provided LLM to produce synthetic data based on the few-shot prompt template. template¶ Template for few-shot prompting. Type FewShotPromptTemplate llm¶ Large Language Model to use for generation. Type Optional[BaseLanguageModel] llm_chain¶ LLM chain with the LLM and few-shot template. Type Optional[Chain] example_input_key¶ Key to use for storing example inputs. Type str Usage Example:>>> template = FewShotPromptTemplate(...) >>> llm = BaseLanguageModel(...) >>> generator = SyntheticDataGenerator(template=template, llm=llm) >>> results = generator.generate(subject="climate change", runs=5) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_input_key: str = 'example'¶ param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶ param llm_chain: Optional[langchain.chains.base.Chain] = None¶ param results: list = []¶ param template: langchain.prompts.few_shot.FewShotPromptTemplate [Required]¶ async agenerate(subject: str, runs: int, extra: str = '', *args: Any, **kwargs: Any) → List[str][source]¶ Generate synthetic data using the given subject asynchronously. Note: Since the LLM calls run concurrently, you may have fewer duplicates by adding specific instructions to the “extra” keyword argument. Parameters
https://api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html
bfab539cecac-1
the “extra” keyword argument. Parameters subject (str) – The subject the synthetic data will be about. runs (int) – Number of times to generate the data asynchronously. extra (str) – Extra instructions for steerability in data generation. Returns List of generated synthetic data for the given subject. Return type List[str] Usage Example:>>> results = await generator.agenerate(subject="climate change", runs=5, extra="Focus on env impacts.") classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
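A fuller, hedged sketch of the class in use. It assumes the few-shot template exposes subject and extra input variables, as the generate()/agenerate() signatures suggest; the example records and model settings are illustrative.

import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_experimental.tabular_synthetic_data.base import SyntheticDataGenerator

examples = [
    {"example": "Patient 123, diagnosis: flu, bill: $250"},
    {"example": "Patient 456, diagnosis: sprain, bill: $120"},
]
template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=PromptTemplate(input_variables=["example"], template="{example}"),
    prefix="Generate one synthetic record about {subject}. {extra}",
    suffix="Now you:",
    input_variables=["subject", "extra"],
)

generator = SyntheticDataGenerator(template=template, llm=ChatOpenAI(temperature=1))
rows = generator.generate(subject="medical billing", runs=3, extra="Vary the amounts.")
# The async variant issues the LLM calls concurrently:
rows = asyncio.run(generator.agenerate(subject="medical billing", runs=3))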
https://api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html
bfab539cecac-2
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ generate(subject: str, runs: int, *args: Any, **kwargs: Any) → List[str][source]¶ Generate synthetic data using the given subject string. Parameters subject (str) – The subject the synthetic data will be about. runs (int) – Number of times to generate the data. extra (str) – Extra instructions for steerability in data generation. Returns List of generated synthetic data. Return type List[str] Usage Example:>>> results = generator.generate(subject="climate change", runs=5, extra="Focus on environmental impacts.") json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict().
https://api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html
bfab539cecac-3
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.base.SyntheticDataGenerator.html
3cb14f662297-0
langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator¶ langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: ChatOpenAI, prompt: BasePromptTemplate, output_parser: Optional[BaseLLMOutputParser] = None, **kwargs: Any) → SyntheticDataGenerator[source]¶ Create an instance of SyntheticDataGenerator tailored for OpenAI models. This function creates an LLM chain designed for structured output based on the provided schema, language model, and prompt template. The resulting chain is then used to instantiate and return a SyntheticDataGenerator. Parameters output_schema (Union[Dict[str, Any], Type[BaseModel]]) – Schema for expected a (output. This can be either a dictionary representing a valid JsonSchema or) – class. (Pydantic BaseModel) – llm (ChatOpenAI) – OpenAI language model to use. prompt (BasePromptTemplate) – Template to be used for generating prompts. output_parser (Optional[BaseLLMOutputParser], optional) – Parser for provided (processing model outputs. If none is) – inferred (a default will be) – types. (from the function) – **kwargs – Additional keyword arguments to be passed to create_structured_output_chain. – Returns: SyntheticDataGenerator: An instance of the data generator set up with the constructed chain. Usage:To generate synthetic data with a structured output, first define your desired output schema. Then, use this function to create a SyntheticDataGenerator instance. After obtaining the generator, you can utilize its methods to produce the desired synthetic data.
https://api.python.langchain.com/en/latest/tabular_synthetic_data/langchain_experimental.tabular_synthetic_data.openai.create_openai_data_generator.html
92f96472c3cf-0
langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute¶ class langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute[source]¶ Bases: Chain Plan and execute a chain of steps. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param executor: langchain_experimental.plan_and_execute.executors.base.BaseExecutor [Required]¶ The executor to use. param input_key: str = 'input'¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param output_key: str = 'output'¶ param planner: langchain_experimental.plan_and_execute.planners.base.BasePlanner [Required]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
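A hedged end-to-end sketch of the chain described on this page, using the planner/executor loaders exported alongside it; the tool list and question are illustrative.

from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=model)

planner = load_chat_planner(model)                          # drafts the plan (a list of steps)
executor = load_agent_executor(model, tools, verbose=True)  # runs each step

agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("What is 3 raised to the power of 0.5, rounded to two decimals?")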
92f96472c3cf-1
The planner to use. param step_container: langchain_experimental.plan_and_execute.schema.BaseStepContainer [Optional]¶ The step container to use. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-2
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-3
addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-4
addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system.
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-5
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-6
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-7
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-8
Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-9
these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-10
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Type[langchain.schema.runnable.utils.Input]¶ property OutputType: Type[langchain.schema.runnable.utils.Output]¶ property input_keys: List[str]¶ Keys expected to be in the chain input. property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_keys: List[str]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
92f96472c3cf-11
property output_keys: List[str]¶ Keys expected to be in the chain output. property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html
c48407917919-0
langchain_experimental.plan_and_execute.schema.StepResponse¶ class langchain_experimental.plan_and_execute.schema.StepResponse[source]¶ Bases: BaseModel Step response. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param response: str [Required]¶ The response. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.StepResponse.html
c48407917919-1
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.StepResponse.html
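A short usage sketch of this schema object (the response text is illustrative):

from langchain_experimental.plan_and_execute.schema import StepResponse

# StepResponse is a plain pydantic model with one required field.
resp = StepResponse(response="The capital of France is Paris.")
print(resp.response)
print(resp.json())  # standard pydantic serialization of the model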
ad98efda0fe5-0
langchain_experimental.plan_and_execute.executors.base.BaseExecutor¶ class langchain_experimental.plan_and_execute.executors.base.BaseExecutor[source]¶ Bases: BaseModel Base executor. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶ Take async step. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.BaseExecutor.html
ad98efda0fe5-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.BaseExecutor.html
ad98efda0fe5-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ abstract step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶ Take step. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.BaseExecutor.html
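Because step and astep are abstract, BaseExecutor is used by subclassing. A minimal sketch of a custom executor; the echo behaviour and the input key shown are illustrative, not part of the library:

from typing import Any

from langchain_experimental.plan_and_execute.executors.base import BaseExecutor
from langchain_experimental.plan_and_execute.schema import StepResponse


class EchoExecutor(BaseExecutor):
    """Toy executor that answers every step by echoing its inputs."""

    def step(self, inputs: dict, callbacks: Any = None, **kwargs: Any) -> StepResponse:
        # A real executor would run a chain or agent over the inputs.
        return StepResponse(response=f"Handled: {inputs}")

    async def astep(self, inputs: dict, callbacks: Any = None, **kwargs: Any) -> StepResponse:
        # Delegate to the sync implementation for simplicity.
        return self.step(inputs, callbacks=callbacks, **kwargs)


executor = EchoExecutor()
print(executor.step({"current_step": "Find the answer."}).response)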
102bd079ce7f-0
langchain_experimental.plan_and_execute.executors.base.ChainExecutor¶ class langchain_experimental.plan_and_execute.executors.base.ChainExecutor[source]¶ Bases: BaseExecutor Chain executor. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param chain: langchain.chains.base.Chain [Required]¶ The chain to use. async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶ Take async step. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.ChainExecutor.html
102bd079ce7f-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.ChainExecutor.html
102bd079ce7f-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶ Take step. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.ChainExecutor.html
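A hedged construction sketch: any Chain works, and the FakeListLLM plus one-line prompt below are stand-ins for a real model and for whatever executor prompt the surrounding agent actually uses:

from langchain.chains import LLMChain
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate
from langchain_experimental.plan_and_execute.executors.base import ChainExecutor

# A single-input LLMChain backed by a canned LLM response.
llm = FakeListLLM(responses=["Paris is the capital of France."])
prompt = PromptTemplate.from_template("Answer this step: {objective}")
executor = ChainExecutor(chain=LLMChain(llm=llm, prompt=prompt))

result = executor.step({"objective": "What is the capital of France?"})
print(result.response)  # -> "Paris is the capital of France."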
b39ce281bd10-0
langchain_experimental.plan_and_execute.schema.Plan¶ class langchain_experimental.plan_and_execute.schema.Plan[source]¶ Bases: BaseModel Plan. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param steps: List[langchain_experimental.plan_and_execute.schema.Step] [Required]¶ The steps. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.Plan.html
b39ce281bd10-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.Plan.html
b39ce281bd10-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.Plan.html
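Constructing a Plan by hand; the step texts are illustrative, and the value field comes from the sibling Step model documented elsewhere in this reference:

from langchain_experimental.plan_and_execute.schema import Plan, Step

plan = Plan(
    steps=[
        Step(value="Look up the 2023 population of Paris."),
        Step(value="Summarize the finding in one sentence."),
    ]
)
for number, step in enumerate(plan.steps, start=1):
    print(f"{number}. {step.value}")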
ceddc538c176-0
langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser¶ class langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser[source]¶ Bases: PlanOutputParser Planning output parser. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently. async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.config.RunnableConfig | None = None, **kwargs: Optional[Any]) → T¶ Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously. async aparse(text: str) → T¶ Parse a single string model output into some structure. Parameters text – String output of a language model. Returns Structured output. async aparse_result(result: List[Generation], *, partial: bool = False) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
ceddc538c176-1
Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → AsyncIterator[RunLogPatch]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable.
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
ceddc538c176-2
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_orm(obj: Any) → Model¶ get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶ classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
ceddc538c176-3
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. parse(text: str) → Plan[source]¶ Parse into a plan. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ parse_result(result: List[Generation], *, partial: bool = False) → T¶ Parse a list of candidate model Generations into a specific format.
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
ceddc538c176-4
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of a language model. prompt – Input PromptValue. Returns Structured output. classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
ceddc538c176-5
Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ with_retry(*, retry_if_exception_type: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ property InputType: Any¶ property OutputType: type[T]¶ property input_schema: Type[pydantic.main.BaseModel]¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser.html
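A parsing sketch, assuming the parser splits the planner LLM's numbered list into one Step per item (the sample text is illustrative):

from langchain_experimental.plan_and_execute.planners.chat_planner import (
    PlanningOutputParser,
)

text = """Here is the plan:
1. Look up the 2023 population of Paris.
2. Divide the population by the city's area."""

plan = PlanningOutputParser().parse(text)
for step in plan.steps:
    print(step.value)
# -> Look up the 2023 population of Paris.
# -> Divide the population by the city's area.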
04cd467b9802-0
langchain_experimental.plan_and_execute.schema.ListStepContainer¶ class langchain_experimental.plan_and_execute.schema.ListStepContainer[source]¶ Bases: BaseStepContainer List step container. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param steps: List[Tuple[langchain_experimental.plan_and_execute.schema.Step, langchain_experimental.plan_and_execute.schema.StepResponse]] [Optional]¶ The steps. add_step(step: Step, step_response: StepResponse) → None[source]¶ Add step and step response to the container. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.ListStepContainer.html
04cd467b9802-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ get_final_response() → str[source]¶ Return the final response based on steps taken. get_steps() → List[Tuple[Step, StepResponse]][source]¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.ListStepContainer.html
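A usage sketch of the container; the step values are illustrative, and get_final_response is assumed to surface the last recorded response, per the method summary above:

from langchain_experimental.plan_and_execute.schema import (
    ListStepContainer,
    Step,
    StepResponse,
)

container = ListStepContainer()
container.add_step(
    Step(value="Find the height of Mount Everest."),
    StepResponse(response="8,849 metres."),
)
container.add_step(
    Step(value="Convert the height to feet."),
    StepResponse(response="Roughly 29,032 feet."),
)

print(len(container.get_steps()))      # -> 2
print(container.get_final_response())  # -> "Roughly 29,032 feet."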