Dataset schema:
    uuid: stringlengths 36–36
    import_dependencies: list
    path: stringlengths 21–94
    type: stringclasses (2 values)
    link: stringlengths 98–248
    source_code: stringlengths 38–36.6k
    doc_content: stringlengths 0–27.2k
    description: stringlengths 0–885
8b302b43-dc4e-4c26-bf3b-79ee8c511cb4
[ "json", "tempfile", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "wandb" ]
langchain.callbacks.wandb_callback.analyze_text
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.analyze_text.html#langchain.callbacks.wandb_callback.analyze_text
def analyze_text(
    text: str,
    complexity_metrics: bool = True,
    visualize: bool = True,
    nlp: Any = None,
    output_dir: Optional[Union[str, Path]] = None,
) -> dict:
    """Analyze text using textstat and spacy.

    Parameters:
        text (str): The text to analyze.
        complexity_metrics (bool): Whether to compute complexity metrics.
        visualize (bool): Whether to visualize the text.
        nlp (spacy.lang): The spacy language model to use for visualization.
        output_dir (str): The directory to save the visualization files to.

    Returns:
        (dict): A dictionary containing the complexity metrics and
            visualization files serialized in a wandb.Html element.
    """
    resp = {}
    textstat = import_textstat()
    wandb = import_wandb()
    spacy = import_spacy()
    if complexity_metrics:
        text_complexity_metrics = {
            "flesch_reading_ease": textstat.flesch_reading_ease(text),
            "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
            "smog_index": textstat.smog_index(text),
            "coleman_liau_index": textstat.coleman_liau_index(text),
            "automated_readability_index": textstat.automated_readability_index(text),
            "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
            "difficult_words": textstat.difficult_words(text),
            "linsear_write_formula": textstat.linsear_write_formula(text),
            "gunning_fog": textstat.gunning_fog(text),
            "text_standard": textstat.text_standard(text),
            "fernandez_huerta": textstat.fernandez_huerta(text),
            "szigriszt_pazos": textstat.szigriszt_pazos(text),
            "gutierrez_polini": textstat.gutierrez_polini(text),
            "crawford": textstat.crawford(text),
            "gulpease_index": textstat.gulpease_index(text),
            "osman": textstat.osman(text),
        }
        resp.update(text_complexity_metrics)
    if visualize and nlp and output_dir is not None:
        doc = nlp(text)

        dep_out = spacy.displacy.render(  # type: ignore
            doc, style="dep", jupyter=False, page=True
        )
        dep_output_path = Path(output_dir, hash_string(f"dep-{text}") + ".html")
        dep_output_path.open("w", encoding="utf-8").write(dep_out)

        ent_out = spacy.displacy.render(  # type: ignore
            doc, style="ent", jupyter=False, page=True
        )
        ent_output_path = Path(output_dir, hash_string(f"ent-{text}") + ".html")
        ent_output_path.open("w", encoding="utf-8").write(ent_out)

        text_visualizations = {
            "dependency_tree": wandb.Html(str(dep_output_path)),
            "entities": wandb.Html(str(ent_output_path)),
        }
        resp.update(text_visualizations)
    return resp
langchain.callbacks.wandb_callback.analyze_text

langchain.callbacks.wandb_callback.analyze_text(text: str, complexity_metrics: bool = True, visualize: bool = True, nlp: Any = None, output_dir: Optional[Union[str, Path]] = None) → dict

Analyze text using textstat and spacy.

Parameters
    text (str) – The text to analyze.
    complexity_metrics (bool) – Whether to compute complexity metrics.
    visualize (bool) – Whether to visualize the text.
    nlp (spacy.lang) – The spacy language model to use for visualization.
    output_dir (str) – The directory to save the visualization files to.

Returns
    A dictionary containing the complexity metrics and visualization files serialized in a wandb.Html element.

Return type
    (dict)
Analyze text using textstat and spacy.
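A minimal usage sketch for analyze_text, assuming textstat, spacy (with the en_core_web_sm model downloaded), and wandb are installed; the sample sentence and temporary output directory are arbitrary.

```python
# Hedged sketch: assumes `pip install textstat spacy wandb` and
# `python -m spacy download en_core_web_sm` have been run.
import tempfile

import spacy
from langchain.callbacks.wandb_callback import analyze_text

nlp = spacy.load("en_core_web_sm")
with tempfile.TemporaryDirectory() as tmp_dir:
    metrics = analyze_text(
        "The quick brown fox jumps over the lazy dog.",
        complexity_metrics=True,
        visualize=True,
        nlp=nlp,
        output_dir=tmp_dir,
    )
    print(metrics["flesch_reading_ease"])  # one of the textstat scores
```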
8478358d-5a90-4546-8d67-7d681bc29da5
[ "json", "tempfile", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "wandb" ]
langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation.html#langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation
def construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:
    """Construct an html element from a prompt and a generation.

    Parameters:
        prompt (str): The prompt.
        generation (str): The generation.

    Returns:
        (wandb.Html): The html element.
    """
    wandb = import_wandb()
    formatted_prompt = prompt.replace("\n", "<br>")
    formatted_generation = generation.replace("\n", "<br>")

    return wandb.Html(
        f"""
    <p style="color:black;">{formatted_prompt}:</p>
    <blockquote>
      <p style="color:green;">
        {formatted_generation}
      </p>
    </blockquote>
    """,
        inject=False,
    )
langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation

langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) → Any

Construct an html element from a prompt and a generation.

Parameters
    prompt (str) – The prompt.
    generation (str) – The generation.

Returns
    The html element.

Return type
    (wandb.Html)
Construct an html element from a prompt and a generation.
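A short sketch of calling construct_html_from_prompt_and_generation in isolation, assuming only wandb is installed; the prompt and generation strings are invented for the example.

```python
# Hedged sketch: assumes `pip install wandb`.
from langchain.callbacks.wandb_callback import (
    construct_html_from_prompt_and_generation,
)

html = construct_html_from_prompt_and_generation(
    "What is the capital of France?",
    "The capital of France is Paris.",
)
print(type(html))  # a wandb.Html object, ready to log with run.log()
```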
76b9401f-1de9-47b5-9594-293506748de9
[ "json", "tempfile", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "wandb" ]
langchain.callbacks.wandb_callback.WandbCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html#langchain.callbacks.wandb_callback.WandbCallbackHandler
class WandbCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
    """Callback Handler that logs to Weights and Biases.

    Parameters:
        job_type (str): The type of job.
        project (str): The project to log to.
        entity (str): The entity to log to.
        tags (list): The tags to log.
        group (str): The group to log to.
        name (str): The name of the run.
        notes (str): The notes to log.
        visualize (bool): Whether to visualize the run.
        complexity_metrics (bool): Whether to log complexity metrics.
        stream_logs (bool): Whether to stream callback actions to W&B.

    This handler uses whichever callback method is invoked, formats the input
    of each callback function with metadata about the state of the LLM run,
    and adds the response to the list of records for both {method}_records
    and action_records. It then logs the response to Weights and Biases using
    the run.log() method.
    """

    def __init__(
        self,
        job_type: Optional[str] = None,
        project: Optional[str] = "langchain_callback_demo",
        entity: Optional[str] = None,
        tags: Optional[Sequence] = None,
        group: Optional[str] = None,
        name: Optional[str] = None,
        notes: Optional[str] = None,
        visualize: bool = False,
        complexity_metrics: bool = False,
        stream_logs: bool = False,
    ) -> None:
        """Initialize callback handler."""
        wandb = import_wandb()
        import_pandas()
        import_textstat()
        spacy = import_spacy()
        super().__init__()

        self.job_type = job_type
        self.project = project
        self.entity = entity
        self.tags = tags
        self.group = group
        self.name = name
        self.notes = notes
        self.visualize = visualize
        self.complexity_metrics = complexity_metrics
        self.stream_logs = stream_logs

        self.temp_dir = tempfile.TemporaryDirectory()
        self.run: wandb.sdk.wandb_run.Run = wandb.init(  # type: ignore
            job_type=self.job_type,
            project=self.project,
            entity=self.entity,
            tags=self.tags,
            group=self.group,
            name=self.name,
            notes=self.notes,
        )
        warning = (
            "DEPRECATION: The `WandbCallbackHandler` will soon be deprecated in favor "
            "of the `WandbTracer`. Please update your code to use the `WandbTracer` "
            "instead."
        )
        wandb.termwarn(
            warning,
            repeat=False,
        )
        self.callback_columns: list = []
        self.action_records: list = []
        self.complexity_metrics = complexity_metrics
        self.visualize = visualize
        self.nlp = spacy.load("en_core_web_sm")

    def _init_resp(self) -> Dict:
        return {k: None for k in self.callback_columns}

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when LLM starts."""
        self.step += 1
        self.llm_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update({"action": "on_llm_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.get_custom_callback_meta())

        for prompt in prompts:
            prompt_resp = deepcopy(resp)
            prompt_resp["prompts"] = prompt
            self.on_llm_start_records.append(prompt_resp)
            self.action_records.append(prompt_resp)
            if self.stream_logs:
                self.run.log(prompt_resp)

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run when LLM generates a new token."""
        self.step += 1
        self.llm_streams += 1

        resp = self._init_resp()
        resp.update({"action": "on_llm_new_token", "token": token})
        resp.update(self.get_custom_callback_meta())

        self.on_llm_token_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        self.step += 1
        self.llm_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update({"action": "on_llm_end"})
        resp.update(flatten_dict(response.llm_output or {}))
        resp.update(self.get_custom_callback_meta())

        for generations in response.generations:
            for generation in generations:
                generation_resp = deepcopy(resp)
                generation_resp.update(flatten_dict(generation.dict()))
                generation_resp.update(
                    analyze_text(
                        generation.text,
                        complexity_metrics=self.complexity_metrics,
                        visualize=self.visualize,
                        nlp=self.nlp,
                        output_dir=self.temp_dir.name,
                    )
                )
                self.on_llm_end_records.append(generation_resp)
                self.action_records.append(generation_resp)
                if self.stream_logs:
                    self.run.log(generation_resp)

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when LLM errors."""
        self.step += 1
        self.errors += 1

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        self.step += 1
        self.chain_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update({"action": "on_chain_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.get_custom_callback_meta())

        chain_input = inputs["input"]

        if isinstance(chain_input, str):
            input_resp = deepcopy(resp)
            input_resp["input"] = chain_input
            self.on_chain_start_records.append(input_resp)
            self.action_records.append(input_resp)
            if self.stream_logs:
                self.run.log(input_resp)
        elif isinstance(chain_input, list):
            for inp in chain_input:
                input_resp = deepcopy(resp)
                input_resp.update(inp)
                self.on_chain_start_records.append(input_resp)
                self.action_records.append(input_resp)
                if self.stream_logs:
                    self.run.log(input_resp)
        else:
            raise ValueError("Unexpected data format provided!")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running."""
        self.step += 1
        self.chain_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update({"action": "on_chain_end", "outputs": outputs["output"]})
        resp.update(self.get_custom_callback_meta())

        self.on_chain_end_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when chain errors."""
        self.step += 1
        self.errors += 1

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> None:
        """Run when tool starts running."""
        self.step += 1
        self.tool_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update({"action": "on_tool_start", "input_str": input_str})
        resp.update(flatten_dict(serialized))
        resp.update(self.get_custom_callback_meta())

        self.on_tool_start_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        """Run when tool ends running."""
        self.step += 1
        self.tool_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update({"action": "on_tool_end", "output": output})
        resp.update(self.get_custom_callback_meta())

        self.on_tool_end_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when tool errors."""
        self.step += 1
        self.errors += 1

    def on_text(self, text: str, **kwargs: Any) -> None:
        """Run when agent is ending."""
        self.step += 1
        self.text_ctr += 1

        resp = self._init_resp()
        resp.update({"action": "on_text", "text": text})
        resp.update(self.get_custom_callback_meta())

        self.on_text_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
        """Run when agent ends running."""
        self.step += 1
        self.agent_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update(
            {
                "action": "on_agent_finish",
                "output": finish.return_values["output"],
                "log": finish.log,
            }
        )
        resp.update(self.get_custom_callback_meta())

        self.on_agent_finish_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
        self.step += 1
        self.tool_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update(
            {
                "action": "on_agent_action",
                "tool": action.tool,
                "tool_input": action.tool_input,
                "log": action.log,
            }
        )
        resp.update(self.get_custom_callback_meta())

        self.on_agent_action_records.append(resp)
        self.action_records.append(resp)
        if self.stream_logs:
            self.run.log(resp)

    def _create_session_analysis_df(self) -> Any:
        """Create a dataframe with all the information from the session."""
        pd = import_pandas()
        on_llm_start_records_df = pd.DataFrame(self.on_llm_start_records)
        on_llm_end_records_df = pd.DataFrame(self.on_llm_end_records)

        llm_input_prompts_df = (
            on_llm_start_records_df[["step", "prompts", "name"]]
            .dropna(axis=1)
            .rename({"step": "prompt_step"}, axis=1)
        )
        complexity_metrics_columns = []
        visualizations_columns = []

        if self.complexity_metrics:
            complexity_metrics_columns = [
                "flesch_reading_ease",
                "flesch_kincaid_grade",
                "smog_index",
                "coleman_liau_index",
                "automated_readability_index",
                "dale_chall_readability_score",
                "difficult_words",
                "linsear_write_formula",
                "gunning_fog",
                "text_standard",
                "fernandez_huerta",
                "szigriszt_pazos",
                "gutierrez_polini",
                "crawford",
                "gulpease_index",
                "osman",
            ]

        if self.visualize:
            visualizations_columns = ["dependency_tree", "entities"]

        llm_outputs_df = (
            on_llm_end_records_df[
                [
                    "step",
                    "text",
                    "token_usage_total_tokens",
                    "token_usage_prompt_tokens",
                    "token_usage_completion_tokens",
                ]
                + complexity_metrics_columns
                + visualizations_columns
            ]
            .dropna(axis=1)
            .rename({"step": "output_step", "text": "output"}, axis=1)
        )
        session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)
        session_analysis_df["chat_html"] = session_analysis_df[
            ["prompts", "output"]
        ].apply(
            lambda row: construct_html_from_prompt_and_generation(
                row["prompts"], row["output"]
            ),
            axis=1,
        )
        return session_analysis_df

    def flush_tracker(
        self,
        langchain_asset: Any = None,
        reset: bool = True,
        finish: bool = False,
        job_type: Optional[str] = None,
        project: Optional[str] = None,
        entity: Optional[str] = None,
        tags: Optional[Sequence] = None,
        group: Optional[str] = None,
        name: Optional[str] = None,
        notes: Optional[str] = None,
        visualize: Optional[bool] = None,
        complexity_metrics: Optional[bool] = None,
    ) -> None:
        """Flush the tracker and reset the session.

        Args:
            langchain_asset: The langchain asset to save.
            reset: Whether to reset the session.
            finish: Whether to finish the run.
            job_type: The job type.
            project: The project.
            entity: The entity.
            tags: The tags.
            group: The group.
            name: The name.
            notes: The notes.
            visualize: Whether to visualize.
            complexity_metrics: Whether to compute complexity metrics.

        Returns:
            None
        """
        pd = import_pandas()
        wandb = import_wandb()
        action_records_table = wandb.Table(dataframe=pd.DataFrame(self.action_records))
        session_analysis_table = wandb.Table(
            dataframe=self._create_session_analysis_df()
        )
        self.run.log(
            {
                "action_records": action_records_table,
                "session_analysis": session_analysis_table,
            }
        )

        if langchain_asset:
            langchain_asset_path = Path(self.temp_dir.name, "model.json")
            model_artifact = wandb.Artifact(name="model", type="model")
            model_artifact.add(action_records_table, name="action_records")
            model_artifact.add(session_analysis_table, name="session_analysis")
            try:
                langchain_asset.save(langchain_asset_path)
                model_artifact.add_file(str(langchain_asset_path))
                model_artifact.metadata = load_json_to_dict(langchain_asset_path)
            except ValueError:
                langchain_asset.save_agent(langchain_asset_path)
                model_artifact.add_file(str(langchain_asset_path))
                model_artifact.metadata = load_json_to_dict(langchain_asset_path)
            except NotImplementedError as e:
                print("Could not save model.")
                print(repr(e))
                pass
            self.run.log_artifact(model_artifact)

        if finish or reset:
            self.run.finish()
            self.temp_dir.cleanup()
            self.reset_callback_meta()
        if reset:
            self.__init__(  # type: ignore
                job_type=job_type if job_type else self.job_type,
                project=project if project else self.project,
                entity=entity if entity else self.entity,
                tags=tags if tags else self.tags,
                group=group if group else self.group,
                name=name if name else self.name,
                notes=notes if notes else self.notes,
                visualize=visualize if visualize else self.visualize,
                complexity_metrics=complexity_metrics
                if complexity_metrics
                else self.complexity_metrics,
            )
langchain.callbacks.wandb_callback.WandbCallbackHandler

class langchain.callbacks.wandb_callback.WandbCallbackHandler(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)

Bases: BaseMetadataCallbackHandler, BaseCallbackHandler

Callback Handler that logs to Weights and Biases.

Parameters
    job_type (str) – The type of job.
    project (str) – The project to log to.
    entity (str) – The entity to log to.
    tags (list) – The tags to log.
    group (str) – The group to log to.
    name (str) – The name of the run.
    notes (str) – The notes to log.
    visualize (bool) – Whether to visualize the run.
    complexity_metrics (bool) – Whether to log complexity metrics.
    stream_logs (bool) – Whether to stream callback actions to W&B.

This handler uses whichever callback method is invoked, formats the input of each callback function with metadata about the state of the LLM run, and adds the response to the list of records for both {method}_records and action_records. It then logs the response to Weights and Biases using the run.log() method.

Initialize callback handler.

Methods
    __init__([job_type, project, entity, tags, ...]) – Initialize callback handler.
    flush_tracker([langchain_asset, reset, ...]) – Flush the tracker and reset the session.
    get_custom_callback_meta()
    on_agent_action(action, **kwargs) – Run on agent action.
    on_agent_finish(finish, **kwargs) – Run when agent ends running.
    on_chain_end(outputs, **kwargs) – Run when chain ends running.
    on_chain_error(error, **kwargs) – Run when chain errors.
    on_chain_start(serialized, inputs, **kwargs) – Run when chain starts running.
    on_chat_model_start(serialized, messages, *, ...) – Run when a chat model starts running.
    on_llm_end(response, **kwargs) – Run when LLM ends running.
    on_llm_error(error, **kwargs) – Run when LLM errors.
    on_llm_new_token(token, **kwargs) – Run when LLM generates a new token.
    on_llm_start(serialized, prompts, **kwargs) – Run when LLM starts.
    on_retriever_end(documents, *, run_id[, ...]) – Run when Retriever ends running.
    on_retriever_error(error, *, run_id[, ...]) – Run when Retriever errors.
    on_retriever_start(serialized, query, *, run_id) – Run when Retriever starts running.
    on_text(text, **kwargs) – Run when agent is ending.
    on_tool_end(output, **kwargs) – Run when tool ends running.
    on_tool_error(error, **kwargs) – Run when tool errors.
    on_tool_start(serialized, input_str, **kwargs) – Run when tool starts running.
    reset_callback_meta() – Reset the callback metadata.

Attributes
    always_verbose – Whether to call verbose callbacks even if verbose is False.
    ignore_agent – Whether to ignore agent callbacks.
    ignore_chain – Whether to ignore chain callbacks.
    ignore_chat_model – Whether to ignore chat model callbacks.
    ignore_llm – Whether to ignore LLM callbacks.
    ignore_retriever – Whether to ignore retriever callbacks.
    raise_error
    run_inline

flush_tracker(langchain_asset: Any = None, reset: bool = True, finish: bool = False, job_type: Optional[str] = None, project: Optional[str] = None, entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: Optional[bool] = None, complexity_metrics: Optional[bool] = None) → None
    Flush the tracker and reset the session.
    Parameters
        langchain_asset – The langchain asset to save.
        reset – Whether to reset the session.
        finish – Whether to finish the run.
        job_type – The job type.
        project – The project.
        entity – The entity.
        tags – The tags.
        group – The group.
        name – The name.
        notes – The notes.
        visualize – Whether to visualize.
        complexity_metrics – Whether to compute complexity metrics.
    Returns: None

get_custom_callback_meta() → Dict[str, Any]

on_agent_action(action: AgentAction, **kwargs: Any) → Any
    Run on agent action.

on_agent_finish(finish: AgentFinish, **kwargs: Any) → None
    Run when agent ends running.

on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None
    Run when chain ends running.

on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Run when chain errors.

on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None
    Run when chain starts running.

on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when a chat model starts running.

on_llm_end(response: LLMResult, **kwargs: Any) → None
    Run when LLM ends running.

on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Run when LLM errors.

on_llm_new_token(token: str, **kwargs: Any) → None
    Run when LLM generates a new token.

on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None
    Run when LLM starts.

on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when Retriever ends running.

on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when Retriever errors.

on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when Retriever starts running.

on_text(text: str, **kwargs: Any) → None
    Run when agent is ending.

on_tool_end(output: str, **kwargs: Any) → None
    Run when tool ends running.

on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Run when tool errors.

on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None
    Run when tool starts running.

reset_callback_meta() → None
    Reset the callback metadata.

property always_verbose: bool
    Whether to call verbose callbacks even if verbose is False.

property ignore_agent: bool
    Whether to ignore agent callbacks.

property ignore_chain: bool
    Whether to ignore chain callbacks.

property ignore_chat_model: bool
    Whether to ignore chat model callbacks.

property ignore_llm: bool
    Whether to ignore LLM callbacks.

property ignore_retriever: bool
    Whether to ignore retriever callbacks.

raise_error: bool = False

run_inline: bool = False
Callback Handler that logs to Weights and Biases.
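A hedged end-to-end sketch of the handler with an OpenAI LLM, assuming wandb, spacy (with en_core_web_sm), textstat, and pandas are installed, `wandb login` has been run, and OPENAI_API_KEY is set; the model choice, run name, and prompt are arbitrary.

```python
# Hedged sketch, not a definitive recipe: external services are required.
from langchain.callbacks.wandb_callback import WandbCallbackHandler
from langchain.llms import OpenAI

wandb_callback = WandbCallbackHandler(
    job_type="inference",
    project="langchain_callback_demo",
    name="llm-run",                # arbitrary run name
    complexity_metrics=True,       # log textstat readability scores
    visualize=False,               # skip spacy renders for a faster demo
)
llm = OpenAI(temperature=0, callbacks=[wandb_callback])
llm("Tell me a joke about otters.")

# Log the action_records and session_analysis tables, then close the run.
wandb_callback.flush_tracker(llm, finish=True)
```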
4d010d6e-7acc-4800-aa29-63a97ce3ccf6
[ "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.import_pandas", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult" ]
langchain.callbacks.arize_callback.ArizeCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.arize_callback.ArizeCallbackHandler.html#langchain.callbacks.arize_callback.ArizeCallbackHandler
class ArizeCallbackHandler(BaseCallbackHandler):
    """Callback Handler that logs to Arize."""

    def __init__(
        self,
        model_id: Optional[str] = None,
        model_version: Optional[str] = None,
        SPACE_KEY: Optional[str] = None,
        API_KEY: Optional[str] = None,
    ) -> None:
        """Initialize callback handler."""
        super().__init__()
        self.model_id = model_id
        self.model_version = model_version
        self.space_key = SPACE_KEY
        self.api_key = API_KEY
        self.prompt_records: List[str] = []
        self.response_records: List[str] = []
        self.prediction_ids: List[str] = []
        self.pred_timestamps: List[int] = []
        self.response_embeddings: List[float] = []
        self.prompt_embeddings: List[float] = []
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0
        self.step = 0

        from arize.pandas.embeddings import EmbeddingGenerator, UseCases
        from arize.pandas.logger import Client

        self.generator = EmbeddingGenerator.from_use_case(
            use_case=UseCases.NLP.SEQUENCE_CLASSIFICATION,
            model_name="distilbert-base-uncased",
            tokenizer_max_length=512,
            batch_size=256,
        )
        self.arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)
        if SPACE_KEY == "SPACE_KEY" or API_KEY == "API_KEY":
            raise ValueError("❌ CHANGE SPACE AND API KEYS")
        else:
            print("✅ Arize client setup done! Now you can start using Arize!")

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        for prompt in prompts:
            self.prompt_records.append(prompt.replace("\n", ""))

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Do nothing."""
        pass

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        pd = import_pandas()
        from arize.utils.types import (
            EmbeddingColumnNames,
            Environments,
            ModelTypes,
            Schema,
        )

        # Safe check if 'llm_output' and 'token_usage' exist
        if response.llm_output and "token_usage" in response.llm_output:
            self.prompt_tokens = response.llm_output["token_usage"].get(
                "prompt_tokens", 0
            )
            self.total_tokens = response.llm_output["token_usage"].get(
                "total_tokens", 0
            )
            self.completion_tokens = response.llm_output["token_usage"].get(
                "completion_tokens", 0
            )
        else:
            self.prompt_tokens = (
                self.total_tokens
            ) = self.completion_tokens = 0  # assign default value

        for generations in response.generations:
            for generation in generations:
                prompt = self.prompt_records[self.step]
                self.step = self.step + 1
                prompt_embedding = pd.Series(
                    self.generator.generate_embeddings(
                        text_col=pd.Series(prompt.replace("\n", " "))
                    ).reset_index(drop=True)
                )

                # Assigning text to response_text instead of response
                response_text = generation.text.replace("\n", " ")
                response_embedding = pd.Series(
                    self.generator.generate_embeddings(
                        text_col=pd.Series(generation.text.replace("\n", " "))
                    ).reset_index(drop=True)
                )
                pred_timestamp = datetime.now().timestamp()

                # Define the columns and data
                columns = [
                    "prediction_ts",
                    "response",
                    "prompt",
                    "response_vector",
                    "prompt_vector",
                    "prompt_token",
                    "completion_token",
                    "total_token",
                ]
                data = [
                    [
                        pred_timestamp,
                        response_text,
                        prompt,
                        response_embedding[0],
                        prompt_embedding[0],
                        self.prompt_tokens,
                        self.total_tokens,
                        self.completion_tokens,
                    ]
                ]

                # Create the DataFrame
                df = pd.DataFrame(data, columns=columns)

                # Declare prompt and response columns
                prompt_columns = EmbeddingColumnNames(
                    vector_column_name="prompt_vector", data_column_name="prompt"
                )

                response_columns = EmbeddingColumnNames(
                    vector_column_name="response_vector", data_column_name="response"
                )

                schema = Schema(
                    timestamp_column_name="prediction_ts",
                    tag_column_names=[
                        "prompt_token",
                        "completion_token",
                        "total_token",
                    ],
                    prompt_column_names=prompt_columns,
                    response_column_names=response_columns,
                )

                response_from_arize = self.arize_client.log(
                    dataframe=df,
                    schema=schema,
                    model_id=self.model_id,
                    model_version=self.model_version,
                    model_type=ModelTypes.GENERATIVE_LLM,
                    environment=Environments.PRODUCTION,
                )
                if response_from_arize.status_code == 200:
                    print("✅ Successfully logged data to Arize!")
                else:
                    print(f'❌ Logging failed "{response_from_arize.text}"')

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Do nothing."""
        pass

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        pass

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Do nothing."""
        pass

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Do nothing."""
        pass

    def on_tool_start(
        self,
        serialized: Dict[str, Any],
        input_str: str,
        **kwargs: Any,
    ) -> None:
        pass

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Do nothing."""
        pass

    def on_tool_end(
        self,
        output: str,
        observation_prefix: Optional[str] = None,
        llm_prefix: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        pass

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        pass

    def on_text(self, text: str, **kwargs: Any) -> None:
        pass

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
        pass
langchain.callbacks.arize_callback.ArizeCallbackHandler

class langchain.callbacks.arize_callback.ArizeCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None)

Bases: BaseCallbackHandler

Callback Handler that logs to Arize.

Initialize callback handler.

Methods
    __init__([model_id, model_version, ...]) – Initialize callback handler.
    on_agent_action(action, **kwargs) – Do nothing.
    on_agent_finish(finish, **kwargs) – Run on agent end.
    on_chain_end(outputs, **kwargs) – Do nothing.
    on_chain_error(error, **kwargs) – Do nothing.
    on_chain_start(serialized, inputs, **kwargs) – Run when chain starts running.
    on_chat_model_start(serialized, messages, *, ...) – Run when a chat model starts running.
    on_llm_end(response, **kwargs) – Run when LLM ends running.
    on_llm_error(error, **kwargs) – Do nothing.
    on_llm_new_token(token, **kwargs) – Do nothing.
    on_llm_start(serialized, prompts, **kwargs) – Run when LLM starts running.
    on_retriever_end(documents, *, run_id[, ...]) – Run when Retriever ends running.
    on_retriever_error(error, *, run_id[, ...]) – Run when Retriever errors.
    on_retriever_start(serialized, query, *, run_id) – Run when Retriever starts running.
    on_text(text, **kwargs) – Run on arbitrary text.
    on_tool_end(output[, observation_prefix, ...]) – Run when tool ends running.
    on_tool_error(error, **kwargs) – Run when tool errors.
    on_tool_start(serialized, input_str, **kwargs) – Run when tool starts running.

Attributes
    ignore_agent – Whether to ignore agent callbacks.
    ignore_chain – Whether to ignore chain callbacks.
    ignore_chat_model – Whether to ignore chat model callbacks.
    ignore_llm – Whether to ignore LLM callbacks.
    ignore_retriever – Whether to ignore retriever callbacks.
    raise_error
    run_inline

on_agent_action(action: AgentAction, **kwargs: Any) → Any
    Do nothing.

on_agent_finish(finish: AgentFinish, **kwargs: Any) → None
    Run on agent end.

on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None
    Do nothing.

on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Do nothing.

on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None
    Run when chain starts running.

on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when a chat model starts running.

on_llm_end(response: LLMResult, **kwargs: Any) → None
    Run when LLM ends running.

on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Do nothing.

on_llm_new_token(token: str, **kwargs: Any) → None
    Do nothing.

on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None
    Run when LLM starts running.

on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when Retriever ends running.

on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any
    Run when Retriever errors.

on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any
    Run when Retriever starts running.

on_text(text: str, **kwargs: Any) → None
    Run on arbitrary text.

on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None
    Run when tool ends running.

on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None
    Run when tool errors.

on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None
    Run when tool starts running.

property ignore_agent: bool
    Whether to ignore agent callbacks.

property ignore_chain: bool
    Whether to ignore chain callbacks.

property ignore_chat_model: bool
    Whether to ignore chat model callbacks.

property ignore_llm: bool
    Whether to ignore LLM callbacks.

property ignore_retriever: bool
    Whether to ignore retriever callbacks.

raise_error: bool = False

run_inline: bool = False
Callback Handler that logs to Arize.
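A hedged setup sketch for the Arize handler, assuming the arize package is installed and real Arize credentials are available; the key strings and model identifiers below are placeholders that must be replaced, and OPENAI_API_KEY is assumed for the model call.

```python
# Hedged sketch: requires `pip install arize` and working credentials.
from langchain.callbacks.arize_callback import ArizeCallbackHandler
from langchain.llms import OpenAI

arize_callback = ArizeCallbackHandler(
    model_id="llm-demo",                     # arbitrary model identifier
    model_version="1.0",
    SPACE_KEY="<your-arize-space-key>",      # placeholder, not a real key
    API_KEY="<your-arize-api-key>",          # placeholder, not a real key
)
llm = OpenAI(temperature=0, callbacks=[arize_callback])
llm("Explain the difference between a list and a tuple in Python.")
# on_llm_end embeds the prompt/response pair and logs it to Arize.
```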
d343912f-c7cc-49e3-8e7b-cf6898c1b47b
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union" ]
langchain.callbacks.utils.import_spacy
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_spacy.html#langchain.callbacks.utils.import_spacy
def import_spacy() -> Any:
    """Import the spacy python package and raise an error if it is not installed."""
    try:
        import spacy
    except ImportError:
        raise ImportError(
            "This callback manager requires the `spacy` python "
            "package installed. Please install it with `pip install spacy`"
        )
    return spacy
langchain.callbacks.utils.import_spacy

langchain.callbacks.utils.import_spacy() → Any

Import the spacy python package and raise an error if it is not installed.
Import the spacy python package and raise an error if it is not installed.
d90c31df-a0d2-45f8-a409-8eb28dd965e1
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union", "spacy" ]
langchain.callbacks.utils.import_pandas
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_pandas.html#langchain.callbacks.utils.import_pandas
def import_pandas() -> Any:
    """Import the pandas python package and raise an error if it is not installed."""
    try:
        import pandas
    except ImportError:
        raise ImportError(
            "This callback manager requires the `pandas` python "
            "package installed. Please install it with `pip install pandas`"
        )
    return pandas
langchain.callbacks.utils.import_pandas

langchain.callbacks.utils.import_pandas() → Any

Import the pandas python package and raise an error if it is not installed.
Import the pandas python package and raise an error if it is not installed.
d5935d57-ba2b-41c1-81d8-645447e19fda
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union", "spacy", "pandas" ]
langchain.callbacks.utils.import_textstat
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.import_textstat.html#langchain.callbacks.utils.import_textstat
def import_textstat() -> Any:
    """Import the textstat python package and raise an error if it is not installed."""
    try:
        import textstat
    except ImportError:
        raise ImportError(
            "This callback manager requires the `textstat` python "
            "package installed. Please install it with `pip install textstat`"
        )
    return textstat
langchain.callbacks.utils.import_textstat

langchain.callbacks.utils.import_textstat() → Any

Import the textstat python package and raise an error if it is not installed.
Import the textstat python package and raise an error if it is not installed.
06192fd7-7a8d-44ef-b431-763efc7833f2
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union", "spacy", "pandas", "textstat" ]
langchain.callbacks.utils.flatten_dict
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.flatten_dict.html#langchain.callbacks.utils.flatten_dict
def flatten_dict(
    nested_dict: Dict[str, Any], parent_key: str = "", sep: str = "_"
) -> Dict[str, Any]:
    """Flattens a nested dictionary into a flat dictionary.

    Parameters:
        nested_dict (dict): The nested dictionary to flatten.
        parent_key (str): The prefix to prepend to the keys of the flattened dict.
        sep (str): The separator to use between the parent key and the key of the
            flattened dictionary.

    Returns:
        (dict): A flat dictionary.
    """
    flat_dict = {k: v for k, v in _flatten_dict(nested_dict, parent_key, sep)}
    return flat_dict
langchain.callbacks.utils.flatten_dict

langchain.callbacks.utils.flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '_') → Dict[str, Any]

Flattens a nested dictionary into a flat dictionary.

Parameters
    nested_dict (dict) – The nested dictionary to flatten.
    parent_key (str) – The prefix to prepend to the keys of the flattened dict.
    sep (str) – The separator to use between the parent key and the key of the flattened dictionary.

Returns
    A flat dictionary.

Return type
    (dict)
Flattens a nested dictionary into a flat dictionary.
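A self-contained illustration of the flattening behavior; the nested dict is invented for the example.

```python
from langchain.callbacks.utils import flatten_dict

nested = {"model": {"name": "gpt-3.5-turbo", "params": {"temperature": 0.7}}}
print(flatten_dict(nested))
# Keys are joined with the default separator "_":
# {'model_name': 'gpt-3.5-turbo', 'model_params_temperature': 0.7}
```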
0a8bee0a-7231-4d24-b6c8-afd92b87d66c
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union", "spacy", "pandas", "textstat" ]
langchain.callbacks.utils.hash_string
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.hash_string.html#langchain.callbacks.utils.hash_string
def hash_string(s: str) -> str:
    """Hash a string using sha1.

    Parameters:
        s (str): The string to hash.

    Returns:
        (str): The hashed string.
    """
    return hashlib.sha1(s.encode("utf-8")).hexdigest()
langchain.callbacks.utils.hash_string

langchain.callbacks.utils.hash_string(s: str) → str

Hash a string using sha1.

Parameters
    s (str) – The string to hash.

Returns
    The hashed string.

Return type
    (str)
Hash a string using sha1.
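A one-line illustration; the input string mirrors the dep-{text} filenames that analyze_text builds from it.

```python
from langchain.callbacks.utils import hash_string

print(hash_string("dep-The quick brown fox"))  # a 40-character hex digest
```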
6c234ff7-52d2-4ae6-836f-aeda0203a474
[ "hashlib", "pathlib.Path", "typing.Any", "typing.Dict", "typing.Iterable", "typing.Tuple", "typing.Union", "spacy", "pandas", "textstat" ]
langchain.callbacks.utils.load_json
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.utils.load_json.html#langchain.callbacks.utils.load_json
def load_json(json_path: Union[str, Path]) -> str:
    """Load json file to a string.

    Parameters:
        json_path (str): The path to the json file.

    Returns:
        (str): The string representation of the json file.
    """
    with open(json_path, "r") as f:
        data = f.read()
    return data
langchain.callbacks.utils.load_json

langchain.callbacks.utils.load_json(json_path: Union[str, Path]) → str

Load json file to a string.

Parameters
    json_path (str) – The path to the json file.

Returns
    The string representation of the json file.

Return type
    (str)
Load json file to a string.
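A round-trip illustration using a temporary file, so it runs without any pre-existing JSON on disk; the file name and contents are arbitrary.

```python
import json
import tempfile
from pathlib import Path

from langchain.callbacks.utils import load_json

with tempfile.TemporaryDirectory() as tmp_dir:
    json_path = Path(tmp_dir, "model.json")
    json_path.write_text(json.dumps({"temperature": 0.7}))
    print(load_json(json_path))  # '{"temperature": 0.7}' as a plain string
```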
7b0451e1-1441-49ef-857c-d0fcc7924a1c
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.get_openai_callback
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.get_openai_callback.html#langchain.callbacks.manager.get_openai_callback
@contextmanager
def get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:
    """Get the OpenAI callback handler in a context manager,
    which conveniently exposes token and cost information.

    Returns:
        OpenAICallbackHandler: The OpenAI callback handler.

    Example:
        >>> with get_openai_callback() as cb:
        ...     # Use the OpenAI callback handler
    """
    cb = OpenAICallbackHandler()
    openai_callback_var.set(cb)
    yield cb
    openai_callback_var.set(None)
langchain.callbacks.manager.get_openai_callback

langchain.callbacks.manager.get_openai_callback() → Generator[OpenAICallbackHandler, None, None]

Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information.

Returns
    The OpenAI callback handler.

Return type
    OpenAICallbackHandler

Example
    >>> with get_openai_callback() as cb:
    ...     # Use the OpenAI callback handler
Get the OpenAI callback handler in a context manager.
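A hedged sketch of the context manager with an OpenAI LLM, assuming OPENAI_API_KEY is set; the prompt is arbitrary, and the total_tokens/total_cost attributes come from OpenAICallbackHandler.

```python
# The handler accumulates usage across all OpenAI calls inside the block.
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("What is the square root of 144?")
    print(cb.total_tokens)  # prompt + completion tokens for the call
    print(cb.total_cost)    # estimated USD cost for the call
```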
450741ee-aeba-4913-a12f-d57a2710907f
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.tracing_enabled
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.tracing_enabled.html#langchain.callbacks.manager.tracing_enabled
@contextmanager
def tracing_enabled(
    session_name: str = "default",
) -> Generator[TracerSessionV1, None, None]:
    """Get the Deprecated LangChainTracer in a context manager.

    Args:
        session_name (str, optional): The name of the session.
            Defaults to "default".

    Returns:
        TracerSessionV1: The LangChainTracer session.

    Example:
        >>> with tracing_enabled() as session:
        ...     # Use the LangChainTracer session
    """
    cb = LangChainTracerV1()
    session = cast(TracerSessionV1, cb.load_session(session_name))
    tracing_callback_var.set(cb)
    yield session
    tracing_callback_var.set(None)
langchain.callbacks.manager.tracing_enabled

langchain.callbacks.manager.tracing_enabled(session_name: str = 'default') → Generator[TracerSessionV1, None, None]

Get the Deprecated LangChainTracer in a context manager.

Parameters
    session_name (str, optional) – The name of the session. Defaults to "default".

Returns
    The LangChainTracer session.

Return type
    TracerSessionV1

Example
    >>> with tracing_enabled() as session:
    ...     # Use the LangChainTracer session
Get the Deprecated LangChainTracer in a context manager.
e982b1af-5d07-4e92-afac-99c77c0523cc
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.wandb_tracing_enabled
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.wandb_tracing_enabled.html#langchain.callbacks.manager.wandb_tracing_enabled
@contextmanager
def wandb_tracing_enabled(
    session_name: str = "default",
) -> Generator[None, None, None]:
    """Get the WandbTracer in a context manager.

    Args:
        session_name (str, optional): The name of the session.
            Defaults to "default".

    Returns:
        None

    Example:
        >>> with wandb_tracing_enabled() as session:
        ...     # Use the WandbTracer session
    """
    cb = WandbTracer()
    wandb_tracing_callback_var.set(cb)
    yield None
    wandb_tracing_callback_var.set(None)
langchain.callbacks.manager.wandb_tracing_enabled

langchain.callbacks.manager.wandb_tracing_enabled(session_name: str = 'default') → Generator[None, None, None]

Get the WandbTracer in a context manager.

Parameters
    session_name (str, optional) – The name of the session. Defaults to "default".

Returns
    None

Example
    >>> with wandb_tracing_enabled() as session:
    ...     # Use the WandbTracer session
Get the WandbTracer in a context manager.
b2ba986e-acac-438b-bdea-466e97cd3793
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.tracing_v2_enabled
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.tracing_v2_enabled.html#langchain.callbacks.manager.tracing_v2_enabled
@contextmanager
def tracing_v2_enabled(
    project_name: Optional[str] = None,
    *,
    example_id: Optional[Union[str, UUID]] = None,
    tags: Optional[List[str]] = None,
) -> Generator[None, None, None]:
    """Instruct LangChain to log all runs in context to LangSmith.

    Args:
        project_name (str, optional): The name of the project.
            Defaults to "default".
        example_id (str or UUID, optional): The ID of the example.
            Defaults to None.
        tags (List[str], optional): The tags to add to the run.
            Defaults to None.

    Returns:
        None

    Example:
        >>> with tracing_v2_enabled():
        ...     # LangChain code will automatically be traced
    """
    if isinstance(example_id, str):
        example_id = UUID(example_id)
    cb = LangChainTracer(
        example_id=example_id,
        project_name=project_name,
        tags=tags,
    )
    tracing_v2_callback_var.set(cb)
    yield
    tracing_v2_callback_var.set(None)
langchain.callbacks.manager.tracing_v2_enabled

langchain.callbacks.manager.tracing_v2_enabled(project_name: Optional[str] = None, *, example_id: Optional[Union[UUID, str]] = None, tags: Optional[List[str]] = None) → Generator[None, None, None]

Instruct LangChain to log all runs in context to LangSmith.

Parameters
    project_name (str, optional) – The name of the project. Defaults to "default".
    example_id (str or UUID, optional) – The ID of the example. Defaults to None.
    tags (List[str], optional) – The tags to add to the run. Defaults to None.

Returns
    None

Example
    >>> with tracing_v2_enabled():
    ...     # LangChain code will automatically be traced
Instruct LangChain to log all runs in context to LangSmith.
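A hedged sketch, assuming LangSmith credentials (LANGCHAIN_API_KEY) and OPENAI_API_KEY are configured in the environment; the project name and tag are arbitrary.

```python
from langchain.callbacks.manager import tracing_v2_enabled
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with tracing_v2_enabled(project_name="my-trace-demo", tags=["demo"]):
    llm("Name three prime numbers.")  # this run is traced to LangSmith
```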
2d2e2b94-cb82-4640-a659-1364e8c567f1
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.trace_as_chain_group
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.trace_as_chain_group.html#langchain.callbacks.manager.trace_as_chain_group
@contextmanager
def trace_as_chain_group(
    group_name: str,
    *,
    project_name: Optional[str] = None,
    example_id: Optional[Union[str, UUID]] = None,
    tags: Optional[List[str]] = None,
) -> Generator[CallbackManager, None, None]:
    """Get a callback manager for a chain group in a context manager.

    Useful for grouping different calls together as a single run even if
    they aren't composed in a single chain.

    Args:
        group_name (str): The name of the chain group.
        project_name (str, optional): The name of the project.
            Defaults to None.
        example_id (str or UUID, optional): The ID of the example.
            Defaults to None.
        tags (List[str], optional): The inheritable tags to apply to all runs.
            Defaults to None.

    Returns:
        CallbackManager: The callback manager for the chain group.

    Example:
        >>> with trace_as_chain_group("group_name") as manager:
        ...     # Use the callback manager for the chain group
        ...     llm.predict("Foo", callbacks=manager)
    """
    cb = LangChainTracer(
        project_name=project_name,
        example_id=example_id,
    )
    cm = CallbackManager.configure(
        inheritable_callbacks=[cb],
        inheritable_tags=tags,
    )

    run_manager = cm.on_chain_start({"name": group_name}, {})
    yield run_manager.get_child()
    run_manager.on_chain_end({})
langchain.callbacks.manager.trace_as_chain_group

langchain.callbacks.manager.trace_as_chain_group(group_name: str, *, project_name: Optional[str] = None, example_id: Optional[Union[UUID, str]] = None, tags: Optional[List[str]] = None) → Generator[CallbackManager, None, None]

Get a callback manager for a chain group in a context manager.

Useful for grouping different calls together as a single run even if they aren't composed in a single chain.

Parameters
    group_name (str) – The name of the chain group.
    project_name (str, optional) – The name of the project. Defaults to None.
    example_id (str or UUID, optional) – The ID of the example. Defaults to None.
    tags (List[str], optional) – The inheritable tags to apply to all runs. Defaults to None.

Returns
    The callback manager for the chain group.

Return type
    CallbackManager

Example
    >>> with trace_as_chain_group("group_name") as manager:
    ...     # Use the callback manager for the chain group
    ...     llm.predict("Foo", callbacks=manager)
Get a callback manager for a chain group in a context manager.
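A hedged sketch that groups two independent LLM calls under one traced run, under the same credential assumptions as the previous example; the group name and prompts are arbitrary.

```python
from langchain.callbacks.manager import trace_as_chain_group
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with trace_as_chain_group("qa-session", tags=["grouped"]) as manager:
    # Both calls appear as children of a single "qa-session" run.
    llm.predict("What is 2 + 2?", callbacks=manager)
    llm.predict("And what is that squared?", callbacks=manager)
```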
5cfb71c5-b03d-47ed-af4b-1ecdc6af8c72
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.BaseRunManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.BaseRunManager.html#langchain.callbacks.manager.BaseRunManager
class BaseRunManager(RunManagerMixin):
    """Base class for run manager (a bound callback manager)."""

    def __init__(
        self,
        *,
        run_id: UUID,
        handlers: List[BaseCallbackHandler],
        inheritable_handlers: List[BaseCallbackHandler],
        parent_run_id: Optional[UUID] = None,
        tags: Optional[List[str]] = None,
        inheritable_tags: Optional[List[str]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        inheritable_metadata: Optional[Dict[str, Any]] = None,
    ) -> None:
        """Initialize the run manager.

        Args:
            run_id (UUID): The ID of the run.
            handlers (List[BaseCallbackHandler]): The list of handlers.
            inheritable_handlers (List[BaseCallbackHandler]):
                The list of inheritable handlers.
            parent_run_id (UUID, optional): The ID of the parent run.
                Defaults to None.
            tags (Optional[List[str]]): The list of tags.
            inheritable_tags (Optional[List[str]]): The list of inheritable tags.
            metadata (Optional[Dict[str, Any]]): The metadata.
            inheritable_metadata (Optional[Dict[str, Any]]): The inheritable metadata.
        """
        self.run_id = run_id
        self.handlers = handlers
        self.inheritable_handlers = inheritable_handlers
        self.parent_run_id = parent_run_id
        self.tags = tags or []
        self.inheritable_tags = inheritable_tags or []
        self.metadata = metadata or {}
        self.inheritable_metadata = inheritable_metadata or {}

    @classmethod
    def get_noop_manager(cls: Type[BRM]) -> BRM:
        """Return a manager that doesn't perform any operations.

        Returns:
            BaseRunManager: The noop manager.
        """
        return cls(
            run_id=uuid4(),
            handlers=[],
            inheritable_handlers=[],
            tags=[],
            inheritable_tags=[],
            metadata={},
            inheritable_metadata={},
        )
langchain.callbacks.manager.BaseRunManager¶
class langchain.callbacks.manager.BaseRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: RunManagerMixin
Base class for run manager (a bound callback manager).
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
classmethod get_noop_manager() → BRM[source]¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
Base class for run manager (a bound callback manager).
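A no-op manager is useful when an API requires a run manager argument but no callbacks are wired up. The following sketch (an illustration, not part of the record) constructs one via a concrete subclass from this module:

from langchain.callbacks.manager import CallbackManagerForLLMRun

# get_noop_manager returns an instance with a fresh run_id and empty
# handler lists, so every dispatched event fans out to zero handlers.
noop = CallbackManagerForLLMRun.get_noop_manager()
assert noop.handlers == [] and noop.tags == []
noop.on_llm_new_token("ignored")  # no handlers attached, so this is a no-op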
742bc084-e434-4e18-87c3-09f65d3ecb32
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.RunManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.RunManager.html#langchain.callbacks.manager.RunManager
class RunManager(BaseRunManager):
    """Sync Run Manager."""

    def on_text(
        self,
        text: str,
        **kwargs: Any,
    ) -> Any:
        """Run when text is received.

        Args:
            text (str): The received text.

        Returns:
            Any: The result of the callback.
        """
        _handle_event(
            self.handlers,
            "on_text",
            None,
            text,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.RunManager¶
class langchain.callbacks.manager.RunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: BaseRunManager
Sync Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Sync Run Manager.
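To show the fan-out behavior of on_text, here is a minimal sketch with a toy handler; the handler class and the text are illustrative assumptions:

from uuid import uuid4

from langchain.callbacks.base import BaseCallbackHandler
from langchain.callbacks.manager import RunManager


class PrintTextHandler(BaseCallbackHandler):
    """Toy handler that simply echoes on_text events."""

    def on_text(self, text: str, **kwargs) -> None:
        # run_id, parent_run_id, and tags arrive via **kwargs
        print(f"text event: {text!r}")


manager = RunManager(
    run_id=uuid4(),
    handlers=[PrintTextHandler()],
    inheritable_handlers=[],
)
manager.on_text("hello")  # dispatched to every attached handler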
dcd9318c-5ba3-4f57-9f9c-20f04198f56c
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.ParentRunManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.ParentRunManager.html#langchain.callbacks.manager.ParentRunManager
class ParentRunManager(RunManager):
    """Sync Parent Run Manager."""

    def get_child(self, tag: Optional[str] = None) -> CallbackManager:
        """Get a child callback manager.

        Args:
            tag (str, optional): The tag for the child callback manager.
                Defaults to None.

        Returns:
            CallbackManager: The child callback manager.
        """
        manager = CallbackManager(handlers=[], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        manager.add_tags(self.inheritable_tags)
        manager.add_metadata(self.inheritable_metadata)
        if tag is not None:
            manager.add_tags([tag], False)
        return manager
langchain.callbacks.manager.ParentRunManager¶
class langchain.callbacks.manager.ParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: RunManager
Sync Parent Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → CallbackManager[source]¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Sync Parent Run Manager.
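The get_child hook is how parent/child run nesting is built. A rough sketch, using the on_chain_start entry point shown elsewhere in this module (the run name and tag are placeholders):

from langchain.callbacks.manager import CallbackManager

cm = CallbackManager.configure()
run_manager = cm.on_chain_start({"name": "outer"}, {})  # a ParentRunManager subclass
child = cm_child = run_manager.get_child(tag="step-1")  # tag applied non-inheritably
# Runs started from the child manager are parented to the "outer" run.
assert child.parent_run_id == run_manager.run_id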
6e25a554-5a1e-42e9-a257-1c9dcb596067
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncRunManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncRunManager.html#langchain.callbacks.manager.AsyncRunManager
class AsyncRunManager(BaseRunManager):
    """Async Run Manager."""

    async def on_text(
        self,
        text: str,
        **kwargs: Any,
    ) -> Any:
        """Run when text is received.

        Args:
            text (str): The received text.

        Returns:
            Any: The result of the callback.
        """
        await _ahandle_event(
            self.handlers,
            "on_text",
            None,
            text,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.AsyncRunManager¶
class langchain.callbacks.manager.AsyncRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: BaseRunManager
Async Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Async Run Manager.
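The async variant awaits each handler's coroutine. A minimal sketch, assuming the AsyncCallbackHandler base class from langchain.callbacks.base (not listed in this record's dependencies) and a placeholder handler:

import asyncio
from uuid import uuid4

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncRunManager


class AsyncEcho(AsyncCallbackHandler):
    async def on_text(self, text: str, **kwargs) -> None:
        print(f"text event: {text!r}")


async def main() -> None:
    manager = AsyncRunManager(
        run_id=uuid4(), handlers=[AsyncEcho()], inheritable_handlers=[]
    )
    await manager.on_text("hello")  # awaits every handler's on_text coroutine


asyncio.run(main())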
d4f5ace7-7bf2-4c6d-b759-d871dd78d620
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncParentRunManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncParentRunManager.html#langchain.callbacks.manager.AsyncParentRunManager
class AsyncParentRunManager(AsyncRunManager):
    """Async Parent Run Manager."""

    def get_child(self, tag: Optional[str] = None) -> AsyncCallbackManager:
        """Get a child callback manager.

        Args:
            tag (str, optional): The tag for the child callback manager.
                Defaults to None.

        Returns:
            AsyncCallbackManager: The child callback manager.
        """
        manager = AsyncCallbackManager(handlers=[], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        manager.add_tags(self.inheritable_tags)
        manager.add_metadata(self.inheritable_metadata)
        if tag is not None:
            manager.add_tags([tag], False)
        return manager
langchain.callbacks.manager.AsyncParentRunManager¶
class langchain.callbacks.manager.AsyncParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: AsyncRunManager
Async Parent Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → AsyncCallbackManager[source]¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Async Parent Run Manager.
cc7c5e5a-56c6-4629-997a-7b6789041af5
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.CallbackManagerForLLMRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForLLMRun.html#langchain.callbacks.manager.CallbackManagerForLLMRun
class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
    """Callback manager for LLM run."""

    def on_llm_new_token(
        self,
        token: str,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token.

        Args:
            token (str): The new token.
        """
        _handle_event(
            self.handlers,
            "on_llm_new_token",
            "ignore_llm",
            token=token,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running.

        Args:
            response (LLMResult): The LLM result.
        """
        _handle_event(
            self.handlers,
            "on_llm_end",
            "ignore_llm",
            response,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_llm_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when LLM errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        _handle_event(
            self.handlers,
            "on_llm_error",
            "ignore_llm",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.CallbackManagerForLLMRun¶
class langchain.callbacks.manager.CallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: RunManager, LLMManagerMixin
Callback manager for LLM run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_text(text, **kwargs)
Run when text is received.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) – The LLM result.
on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
Parameters
token (str) – The new token.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Callback manager for LLM run.
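A sketch of token streaming through this manager, using a toy handler that collects tokens. The serialized payload, prompt, and tokens are made-up placeholders; on_llm_start (shown in the CallbackManager record below) returns one of these managers per prompt:

from langchain.callbacks.base import BaseCallbackHandler
from langchain.callbacks.manager import CallbackManager


class TokenCollector(BaseCallbackHandler):
    def __init__(self) -> None:
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)


collector = TokenCollector()
cm = CallbackManager.configure(inheritable_callbacks=[collector])
# One CallbackManagerForLLMRun is returned per prompt.
(run_manager,) = cm.on_llm_start({"name": "fake-llm"}, ["Hello"])
for tok in ["Hi", " there", "!"]:
    run_manager.on_llm_new_token(tok)
assert collector.tokens == ["Hi", " there", "!"]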
b1bf6a5b-bd0b-4f57-b07e-26df3fb50b0d
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncCallbackManagerForLLMRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForLLMRun.html#langchain.callbacks.manager.AsyncCallbackManagerForLLMRun
class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
    """Async callback manager for LLM run."""

    async def on_llm_new_token(
        self,
        token: str,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token.

        Args:
            token (str): The new token.
        """
        await _ahandle_event(
            self.handlers,
            "on_llm_new_token",
            "ignore_llm",
            token,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running.

        Args:
            response (LLMResult): The LLM result.
        """
        await _ahandle_event(
            self.handlers,
            "on_llm_end",
            "ignore_llm",
            response,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_llm_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when LLM errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        await _ahandle_event(
            self.handlers,
            "on_llm_error",
            "ignore_llm",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.AsyncCallbackManagerForLLMRun¶
class langchain.callbacks.manager.AsyncCallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: AsyncRunManager, LLMManagerMixin
Async callback manager for LLM run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_text(text, **kwargs)
Run when text is received.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) – The LLM result.
async on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
Parameters
token (str) – The new token.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Async callback manager for LLM run.
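The async analogue differs mainly in that every callback must be awaited. A rough sketch, assuming the AsyncCallbackHandler base class and placeholder tokens:

import asyncio

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager


class AsyncTokenPrinter(AsyncCallbackHandler):
    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)


async def main() -> None:
    cm = AsyncCallbackManager(handlers=[AsyncTokenPrinter()])
    (run_manager,) = await cm.on_llm_start({"name": "fake-llm"}, ["Hi"])
    for tok in ["Hel", "lo"]:
        await run_manager.on_llm_new_token(tok)


asyncio.run(main())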
927f34b8-5608-4a7d-8a01-446c3283c6e7
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.CallbackManagerForChainRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForChainRun.html#langchain.callbacks.manager.CallbackManagerForChainRun
class CallbackManagerForChainRun(ParentRunManager, ChainManagerMixin):
    """Callback manager for chain run."""

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running.

        Args:
            outputs (Dict[str, Any]): The outputs of the chain.
        """
        _handle_event(
            self.handlers,
            "on_chain_end",
            "ignore_chain",
            outputs,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_chain_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when chain errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        _handle_event(
            self.handlers,
            "on_chain_error",
            "ignore_chain",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run when agent action is received.

        Args:
            action (AgentAction): The agent action.

        Returns:
            Any: The result of the callback.
        """
        _handle_event(
            self.handlers,
            "on_agent_action",
            "ignore_agent",
            action,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run when agent finish is received.

        Args:
            finish (AgentFinish): The agent finish.

        Returns:
            Any: The result of the callback.
        """
        _handle_event(
            self.handlers,
            "on_agent_finish",
            "ignore_agent",
            finish,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.CallbackManagerForChainRun¶
class langchain.callbacks.manager.CallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: ParentRunManager, ChainManagerMixin
Callback manager for chain run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_agent_action(action, **kwargs)
Run when agent action is received.
on_agent_finish(finish, **kwargs)
Run when agent finish is received.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → CallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run when agent action is received.
Parameters
action (AgentAction) – The agent action.
Returns
The result of the callback.
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → Any[source]¶
Run when agent finish is received.
Parameters
finish (AgentFinish) – The agent finish.
Returns
The result of the callback.
Return type
Any
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) – The outputs of the chain.
on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Callback manager for chain run.
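In practice a chain author rarely calls on_chain_end directly; Chain.__call__ fires on_chain_start/on_chain_end itself and passes a bound manager into _call. A minimal sketch of that pattern (the ShoutChain class is a made-up illustration):

from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain


class ShoutChain(Chain):
    """Toy chain illustrating use of the bound run manager."""

    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["shout"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        if run_manager:
            run_manager.on_text("shouting now")  # surfaced to all handlers
        return {"shout": inputs["text"].upper()}


# ShoutChain()({"text": "hi"}) returns the outputs dict; start/end events
# are emitted by the Chain base class around _call.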
9fef017d-91b5-4564-836a-936f6dc8abd7
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncCallbackManagerForChainRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForChainRun.html#langchain.callbacks.manager.AsyncCallbackManagerForChainRun
class AsyncCallbackManagerForChainRun(AsyncParentRunManager, ChainManagerMixin):
    """Async callback manager for chain run."""

    async def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running.

        Args:
            outputs (Dict[str, Any]): The outputs of the chain.
        """
        await _ahandle_event(
            self.handlers,
            "on_chain_end",
            "ignore_chain",
            outputs,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_chain_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when chain errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        await _ahandle_event(
            self.handlers,
            "on_chain_error",
            "ignore_chain",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run when agent action is received.

        Args:
            action (AgentAction): The agent action.

        Returns:
            Any: The result of the callback.
        """
        await _ahandle_event(
            self.handlers,
            "on_agent_action",
            "ignore_agent",
            action,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run when agent finish is received.

        Args:
            finish (AgentFinish): The agent finish.

        Returns:
            Any: The result of the callback.
        """
        await _ahandle_event(
            self.handlers,
            "on_agent_finish",
            "ignore_agent",
            finish,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.AsyncCallbackManagerForChainRun¶
class langchain.callbacks.manager.AsyncCallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: AsyncParentRunManager, ChainManagerMixin
Async callback manager for chain run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_agent_action(action, **kwargs)
Run when agent action is received.
on_agent_finish(finish, **kwargs)
Run when agent finish is received.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run when agent action is received.
Parameters
action (AgentAction) – The agent action.
Returns
The result of the callback.
Return type
Any
async on_agent_finish(finish: AgentFinish, **kwargs: Any) → Any[source]¶
Run when agent finish is received.
Parameters
finish (AgentFinish) – The agent finish.
Returns
The result of the callback.
Return type
Any
async on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) – The outputs of the chain.
async on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Async callback manager for chain run.
ad6912e3-dc2d-45f6-aa17-6c2bff7bf8f4
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.CallbackManagerForToolRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForToolRun.html#langchain.callbacks.manager.CallbackManagerForToolRun
class CallbackManagerForToolRun(ParentRunManager, ToolManagerMixin):
    """Callback manager for tool run."""

    def on_tool_end(
        self,
        output: str,
        **kwargs: Any,
    ) -> None:
        """Run when tool ends running.

        Args:
            output (str): The output of the tool.
        """
        _handle_event(
            self.handlers,
            "on_tool_end",
            "ignore_agent",
            output,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_tool_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when tool errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        _handle_event(
            self.handlers,
            "on_tool_error",
            "ignore_agent",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.CallbackManagerForToolRun¶
class langchain.callbacks.manager.CallbackManagerForToolRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: ParentRunManager, ToolManagerMixin
Callback manager for tool run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
get_child(tag: Optional[str] = None) → CallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (str) – The output of the tool.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
Callback manager for tool run.
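As with chains, tool authors usually receive this manager rather than construct it: BaseTool fires on_tool_start/on_tool_end around _run and passes the bound manager in. A minimal sketch with a made-up tool:

from typing import Optional

from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.tools import BaseTool


class ReverseTool(BaseTool):
    """Toy tool; start/end events are emitted by BaseTool around _run."""

    name = "reverse"
    description = "Reverses the input string."

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        if run_manager:
            run_manager.on_text(f"reversing {len(query)} chars")
        return query[::-1]

    async def _arun(
        self,
        query: str,
        run_manager=None,
    ) -> str:
        return query[::-1]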
e56206bc-76da-4f1f-bf88-8039b9c46a7c
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncCallbackManagerForToolRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForToolRun.html#langchain.callbacks.manager.AsyncCallbackManagerForToolRun
class AsyncCallbackManagerForToolRun(AsyncParentRunManager, ToolManagerMixin):
    """Async callback manager for tool run."""

    async def on_tool_end(self, output: str, **kwargs: Any) -> None:
        """Run when tool ends running.

        Args:
            output (str): The output of the tool.
        """
        await _ahandle_event(
            self.handlers,
            "on_tool_end",
            "ignore_agent",
            output,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_tool_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when tool errors.

        Args:
            error (Exception or KeyboardInterrupt): The error.
        """
        await _ahandle_event(
            self.handlers,
            "on_tool_error",
            "ignore_agent",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.AsyncCallbackManagerForToolRun¶
class langchain.callbacks.manager.AsyncCallbackManagerForToolRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: AsyncParentRunManager, ToolManagerMixin
Async callback manager for tool run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_text(text, **kwargs)
Run when text is received.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
async on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (str) – The output of the tool.
async on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
Async callback manager for tool run.
3e88ed4b-3b85-4604-a272-3ba7de7d6788
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.CallbackManagerForRetrieverRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManagerForRetrieverRun.html#langchain.callbacks.manager.CallbackManagerForRetrieverRun
class CallbackManagerForRetrieverRun(ParentRunManager, RetrieverManagerMixin):
    """Callback manager for retriever run."""

    def on_retriever_end(
        self,
        documents: Sequence[Document],
        **kwargs: Any,
    ) -> None:
        """Run when retriever ends running."""
        _handle_event(
            self.handlers,
            "on_retriever_end",
            "ignore_retriever",
            documents,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    def on_retriever_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when retriever errors."""
        _handle_event(
            self.handlers,
            "on_retriever_error",
            "ignore_retriever",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.CallbackManagerForRetrieverRun¶
class langchain.callbacks.manager.CallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: ParentRunManager, RetrieverManagerMixin
Callback manager for retriever run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retriever_end(documents, **kwargs)
Run when retriever ends running.
on_retriever_error(error, **kwargs)
Run when retriever errors.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → CallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_retriever_end(documents: Sequence[Document], **kwargs: Any) → None[source]¶
Run when retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when retriever errors.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Callback manager for retriever run.
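To make the dispatch concrete, here is a rough sketch that drives on_retriever_end directly with a toy handler; the handler class and document content are placeholders, and in real use the retriever framework constructs this manager for you:

from uuid import uuid4

from langchain.callbacks.base import BaseCallbackHandler
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import Document


class CountDocsHandler(BaseCallbackHandler):
    def on_retriever_end(self, documents, **kwargs) -> None:
        print(f"retriever returned {len(documents)} documents")


manager = CallbackManagerForRetrieverRun(
    run_id=uuid4(),
    handlers=[CountDocsHandler()],
    inheritable_handlers=[],
)
manager.on_retriever_end([Document(page_content="hello world")])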
6af9e03b-da52-4373-a86f-946823b39017
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html#langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun
class AsyncCallbackManagerForRetrieverRun(
    AsyncParentRunManager,
    RetrieverManagerMixin,
):
    """Async callback manager for retriever run."""

    async def on_retriever_end(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> None:
        """Run when retriever ends running."""
        await _ahandle_event(
            self.handlers,
            "on_retriever_end",
            "ignore_retriever",
            documents,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )

    async def on_retriever_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when retriever errors."""
        await _ahandle_event(
            self.handlers,
            "on_retriever_error",
            "ignore_retriever",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            **kwargs,
        )
langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun¶
class langchain.callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: AsyncParentRunManager, RetrieverManagerMixin
Async callback manager for retriever run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retriever_end(documents, **kwargs)
Run when retriever ends running.
on_retriever_error(error, **kwargs)
Run when retriever errors.
on_text(text, **kwargs)
Run when text is received.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager. Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_retriever_end(documents: Sequence[Document], **kwargs: Any) → None[source]¶
Run when retriever ends running.
async on_retriever_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when retriever errors.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Async callback manager for retriever run.
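A minimal usage sketch (not taken from the source above): it drives the retriever run manager by hand, assuming only that this version of langchain is installed; the serialized dict and query string are placeholder values.

import asyncio

from langchain.callbacks.manager import AsyncCallbackManager
from langchain.schema import Document


async def main() -> None:
    # No handlers registered; this still exercises the manager plumbing.
    manager = AsyncCallbackManager(handlers=[])
    run_manager = await manager.on_retriever_start(
        {"name": "my_retriever"},  # placeholder serialized retriever
        "what do callback managers do?",
    )
    docs = [Document(page_content="Callbacks let you observe runs.")]
    # Notify handlers that the retriever finished with these documents.
    await run_manager.on_retriever_end(docs)


asyncio.run(main())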
4f89232c-7757-4fab-a045-483671a5f1fe
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.CallbackManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.CallbackManager.html#langchain.callbacks.manager.CallbackManager
class CallbackManager(BaseCallbackManager):
    """Callback manager that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self,
        serialized: Dict[str, Any],
        prompts: List[str],
        **kwargs: Any,
    ) -> List[CallbackManagerForLLMRun]:
        """Run when LLM starts running.

        Args:
            serialized (Dict[str, Any]): The serialized LLM.
            prompts (List[str]): The list of prompts.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            List[CallbackManagerForLLMRun]: A callback manager for each
                prompt as an LLM run.
        """
        managers = []
        for prompt in prompts:
            run_id_ = uuid4()
            _handle_event(
                self.handlers,
                "on_llm_start",
                "ignore_llm",
                serialized,
                [prompt],
                run_id=run_id_,
                parent_run_id=self.parent_run_id,
                tags=self.tags,
                metadata=self.metadata,
                **kwargs,
            )
            managers.append(
                CallbackManagerForLLMRun(
                    run_id=run_id_,
                    handlers=self.handlers,
                    inheritable_handlers=self.inheritable_handlers,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    inheritable_tags=self.inheritable_tags,
                    metadata=self.metadata,
                    inheritable_metadata=self.inheritable_metadata,
                )
            )
        return managers

    def on_chat_model_start(
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        **kwargs: Any,
    ) -> List[CallbackManagerForLLMRun]:
        """Run when LLM starts running.

        Args:
            serialized (Dict[str, Any]): The serialized LLM.
            messages (List[List[BaseMessage]]): The list of messages.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            List[CallbackManagerForLLMRun]: A callback manager for each
                list of messages as an LLM run.
        """
        managers = []
        for message_list in messages:
            run_id_ = uuid4()
            _handle_event(
                self.handlers,
                "on_chat_model_start",
                "ignore_chat_model",
                serialized,
                [message_list],
                run_id=run_id_,
                parent_run_id=self.parent_run_id,
                tags=self.tags,
                metadata=self.metadata,
                **kwargs,
            )
            managers.append(
                CallbackManagerForLLMRun(
                    run_id=run_id_,
                    handlers=self.handlers,
                    inheritable_handlers=self.inheritable_handlers,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    inheritable_tags=self.inheritable_tags,
                    metadata=self.metadata,
                    inheritable_metadata=self.inheritable_metadata,
                )
            )
        return managers

    def on_chain_start(
        self,
        serialized: Dict[str, Any],
        inputs: Dict[str, Any],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForChainRun:
        """Run when chain starts running.

        Args:
            serialized (Dict[str, Any]): The serialized chain.
            inputs (Dict[str, Any]): The inputs to the chain.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            CallbackManagerForChainRun: The callback manager for the chain run.
        """
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_chain_start",
            "ignore_chain",
            serialized,
            inputs,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )
        return CallbackManagerForChainRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    def on_tool_start(
        self,
        serialized: Dict[str, Any],
        input_str: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForToolRun:
        """Run when tool starts running.

        Args:
            serialized (Dict[str, Any]): The serialized tool.
            input_str (str): The input to the tool.
            run_id (UUID, optional): The ID of the run. Defaults to None.
            parent_run_id (UUID, optional): The ID of the parent run.
                Defaults to None.

        Returns:
            CallbackManagerForToolRun: The callback manager for the tool run.
        """
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_tool_start",
            "ignore_agent",
            serialized,
            input_str,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )
        return CallbackManagerForToolRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    def on_retriever_start(
        self,
        serialized: Dict[str, Any],
        query: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForRetrieverRun:
        """Run when retriever starts running."""
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_retriever_start",
            "ignore_retriever",
            serialized,
            query,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )
        return CallbackManagerForRetrieverRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    @classmethod
    def configure(
        cls,
        inheritable_callbacks: Callbacks = None,
        local_callbacks: Callbacks = None,
        verbose: bool = False,
        inheritable_tags: Optional[List[str]] = None,
        local_tags: Optional[List[str]] = None,
        inheritable_metadata: Optional[Dict[str, Any]] = None,
        local_metadata: Optional[Dict[str, Any]] = None,
    ) -> CallbackManager:
        """Configure the callback manager.

        Args:
            inheritable_callbacks (Optional[Callbacks], optional): The inheritable
                callbacks. Defaults to None.
            local_callbacks (Optional[Callbacks], optional): The local callbacks.
                Defaults to None.
            verbose (bool, optional): Whether to enable verbose mode.
                Defaults to False.
            inheritable_tags (Optional[List[str]], optional): The inheritable tags.
                Defaults to None.
            local_tags (Optional[List[str]], optional): The local tags.
                Defaults to None.
            inheritable_metadata (Optional[Dict[str, Any]], optional): The
                inheritable metadata. Defaults to None.
            local_metadata (Optional[Dict[str, Any]], optional): The local metadata.
                Defaults to None.

        Returns:
            CallbackManager: The configured callback manager.
        """
        return _configure(
            cls,
            inheritable_callbacks,
            local_callbacks,
            verbose,
            inheritable_tags,
            local_tags,
            inheritable_metadata,
            local_metadata,
        )
langchain.callbacks.manager.CallbackManager¶ class langchain.callbacks.manager.CallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶ Bases: BaseCallbackManager Callback manager that can be used to handle callbacks from langchain. Initialize callback manager. Methods __init__(handlers[, inheritable_handlers, ...]) Initialize callback manager. add_handler(handler[, inherit]) Add a handler to the callback manager. add_metadata(metadata[, inherit]) add_tags(tags[, inherit]) configure([inheritable_callbacks, ...]) Configure the callback manager. on_chain_start(serialized, inputs[, run_id]) Run when chain starts running. on_chat_model_start(serialized, messages, ...) Run when LLM starts running. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_start(serialized, query[, ...]) Run when retriever starts running. on_tool_start(serialized, input_str[, ...]) Run when tool starts running. remove_handler(handler) Remove a handler from the callback manager. remove_metadata(keys) remove_tags(tags) set_handler(handler[, inherit]) Set handler as the only handler on the callback manager. set_handlers(handlers[, inherit]) Set handlers as the only handlers on the callback manager. Attributes is_async Whether the callback manager is async. add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶ Add a handler to the callback manager. add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶ add_tags(tags: List[str], inherit: bool = True) → None¶ classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → CallbackManager[source]¶ Configure the callback manager. Parameters inheritable_callbacks (Optional[Callbacks], optional) – The inheritable callbacks. Defaults to None. local_callbacks (Optional[Callbacks], optional) – The local callbacks. Defaults to None. verbose (bool, optional) – Whether to enable verbose mode. Defaults to False. inheritable_tags (Optional[List[str]], optional) – The inheritable tags. Defaults to None. local_tags (Optional[List[str]], optional) – The local tags. Defaults to None. inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable metadata. Defaults to None. local_metadata (Optional[Dict[str, Any]], optional) – The local metadata. Defaults to None. Returns The configured callback manager. Return type CallbackManager on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForChainRun[source]¶ Run when chain starts running. Parameters serialized (Dict[str, Any]) – The serialized chain. inputs (Dict[str, Any]) – The inputs to the chain. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns The callback manager for the chain run. 
Return type CallbackManagerForChainRun on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → List[CallbackManagerForLLMRun][source]¶ Run when LLM starts running. Parameters serialized (Dict[str, Any]) – The serialized LLM. messages (List[List[BaseMessage]]) – The list of messages. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns A callback manager for each list of messages as an LLM run. Return type List[CallbackManagerForLLMRun] on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → List[CallbackManagerForLLMRun][source]¶ Run when LLM starts running. Parameters serialized (Dict[str, Any]) – The serialized LLM. prompts (List[str]) – The list of prompts. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns A callback manager for each prompt as an LLM run. Return type List[CallbackManagerForLLMRun] on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForRetrieverRun[source]¶ Run when retriever starts running. on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForToolRun[source]¶ Run when tool starts running. Parameters serialized (Dict[str, Any]) – The serialized tool. input_str (str) – The input to the tool. run_id (UUID, optional) – The ID of the run. Defaults to None. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. Returns The callback manager for the tool run. Return type CallbackManagerForToolRun remove_handler(handler: BaseCallbackHandler) → None¶ Remove a handler from the callback manager. remove_metadata(keys: List[str]) → None¶ remove_tags(tags: List[str]) → None¶ set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶ Set handler as the only handler on the callback manager. set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶ Set handlers as the only handlers on the callback manager. property is_async: bool¶ Whether the callback manager is async.
Callback manager that can be used to handle callbacks from langchain.
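A rough usage sketch (assuming this version of langchain is installed): the synchronous manager is typically built through configure and then fans out one child run manager per prompt. The serialized dict here is a placeholder.

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler

# A manager with one local handler; verbose=True would add a stdout handler too.
manager = CallbackManager.configure(local_callbacks=[StdOutCallbackHandler()])

# on_llm_start returns one CallbackManagerForLLMRun per prompt.
run_managers = manager.on_llm_start({"name": "fake-llm"}, ["Hello", "World"])
assert len(run_managers) == 2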
7d606bdd-666a-489b-958c-e9e39e295b63
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.AsyncCallbackManager
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.AsyncCallbackManager.html#langchain.callbacks.manager.AsyncCallbackManager
class AsyncCallbackManager(BaseCallbackManager):
    """Async callback manager that can be used to handle callbacks from LangChain."""

    @property
    def is_async(self) -> bool:
        """Return whether the handler is async."""
        return True

    async def on_llm_start(
        self,
        serialized: Dict[str, Any],
        prompts: List[str],
        **kwargs: Any,
    ) -> List[AsyncCallbackManagerForLLMRun]:
        """Run when LLM starts running.

        Args:
            serialized (Dict[str, Any]): The serialized LLM.
            prompts (List[str]): The list of prompts.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            List[AsyncCallbackManagerForLLMRun]: The list of async
                callback managers, one for each LLM Run corresponding
                to each prompt.
        """
        tasks = []
        managers = []

        for prompt in prompts:
            run_id_ = uuid4()

            tasks.append(
                _ahandle_event(
                    self.handlers,
                    "on_llm_start",
                    "ignore_llm",
                    serialized,
                    [prompt],
                    run_id=run_id_,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    metadata=self.metadata,
                    **kwargs,
                )
            )

            managers.append(
                AsyncCallbackManagerForLLMRun(
                    run_id=run_id_,
                    handlers=self.handlers,
                    inheritable_handlers=self.inheritable_handlers,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    inheritable_tags=self.inheritable_tags,
                    metadata=self.metadata,
                    inheritable_metadata=self.inheritable_metadata,
                )
            )

        await asyncio.gather(*tasks)
        return managers

    async def on_chat_model_start(
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        **kwargs: Any,
    ) -> Any:
        """Run when LLM starts running.

        Args:
            serialized (Dict[str, Any]): The serialized LLM.
            messages (List[List[BaseMessage]]): The list of messages.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            List[AsyncCallbackManagerForLLMRun]: The list of async
                callback managers, one for each LLM Run corresponding
                to each inner message list.
        """
        tasks = []
        managers = []

        for message_list in messages:
            run_id_ = uuid4()

            tasks.append(
                _ahandle_event(
                    self.handlers,
                    "on_chat_model_start",
                    "ignore_chat_model",
                    serialized,
                    [message_list],
                    run_id=run_id_,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    metadata=self.metadata,
                    **kwargs,
                )
            )

            managers.append(
                AsyncCallbackManagerForLLMRun(
                    run_id=run_id_,
                    handlers=self.handlers,
                    inheritable_handlers=self.inheritable_handlers,
                    parent_run_id=self.parent_run_id,
                    tags=self.tags,
                    inheritable_tags=self.inheritable_tags,
                    metadata=self.metadata,
                    inheritable_metadata=self.inheritable_metadata,
                )
            )

        await asyncio.gather(*tasks)
        return managers

    async def on_chain_start(
        self,
        serialized: Dict[str, Any],
        inputs: Dict[str, Any],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> AsyncCallbackManagerForChainRun:
        """Run when chain starts running.

        Args:
            serialized (Dict[str, Any]): The serialized chain.
            inputs (Dict[str, Any]): The inputs to the chain.
            run_id (UUID, optional): The ID of the run. Defaults to None.

        Returns:
            AsyncCallbackManagerForChainRun: The async callback manager
                for the chain run.
        """
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_chain_start",
            "ignore_chain",
            serialized,
            inputs,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )

        return AsyncCallbackManagerForChainRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    async def on_tool_start(
        self,
        serialized: Dict[str, Any],
        input_str: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> AsyncCallbackManagerForToolRun:
        """Run when tool starts running.

        Args:
            serialized (Dict[str, Any]): The serialized tool.
            input_str (str): The input to the tool.
            run_id (UUID, optional): The ID of the run. Defaults to None.
            parent_run_id (UUID, optional): The ID of the parent run.
                Defaults to None.

        Returns:
            AsyncCallbackManagerForToolRun: The async callback manager
                for the tool run.
        """
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_tool_start",
            "ignore_agent",
            serialized,
            input_str,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )

        return AsyncCallbackManagerForToolRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    async def on_retriever_start(
        self,
        serialized: Dict[str, Any],
        query: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> AsyncCallbackManagerForRetrieverRun:
        """Run when retriever starts running."""
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_retriever_start",
            "ignore_retriever",
            serialized,
            query,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            metadata=self.metadata,
            **kwargs,
        )

        return AsyncCallbackManagerForRetrieverRun(
            run_id=run_id,
            handlers=self.handlers,
            inheritable_handlers=self.inheritable_handlers,
            parent_run_id=self.parent_run_id,
            tags=self.tags,
            inheritable_tags=self.inheritable_tags,
            metadata=self.metadata,
            inheritable_metadata=self.inheritable_metadata,
        )

    @classmethod
    def configure(
        cls,
        inheritable_callbacks: Callbacks = None,
        local_callbacks: Callbacks = None,
        verbose: bool = False,
        inheritable_tags: Optional[List[str]] = None,
        local_tags: Optional[List[str]] = None,
        inheritable_metadata: Optional[Dict[str, Any]] = None,
        local_metadata: Optional[Dict[str, Any]] = None,
    ) -> AsyncCallbackManager:
        """Configure the async callback manager.

        Args:
            inheritable_callbacks (Optional[Callbacks], optional): The inheritable
                callbacks. Defaults to None.
            local_callbacks (Optional[Callbacks], optional): The local callbacks.
                Defaults to None.
            verbose (bool, optional): Whether to enable verbose mode.
                Defaults to False.
            inheritable_tags (Optional[List[str]], optional): The inheritable tags.
                Defaults to None.
            local_tags (Optional[List[str]], optional): The local tags.
                Defaults to None.
            inheritable_metadata (Optional[Dict[str, Any]], optional): The
                inheritable metadata. Defaults to None.
            local_metadata (Optional[Dict[str, Any]], optional): The local metadata.
                Defaults to None.

        Returns:
            AsyncCallbackManager: The configured async callback manager.
        """
        return _configure(
            cls,
            inheritable_callbacks,
            local_callbacks,
            verbose,
            inheritable_tags,
            local_tags,
            inheritable_metadata,
            local_metadata,
        )
langchain.callbacks.manager.AsyncCallbackManager¶ class langchain.callbacks.manager.AsyncCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶ Bases: BaseCallbackManager Async callback manager that can be used to handle callbacks from LangChain. Initialize callback manager. Methods __init__(handlers[, inheritable_handlers, ...]) Initialize callback manager. add_handler(handler[, inherit]) Add a handler to the callback manager. add_metadata(metadata[, inherit]) add_tags(tags[, inherit]) configure([inheritable_callbacks, ...]) Configure the async callback manager. on_chain_start(serialized, inputs[, run_id]) Run when chain starts running. on_chat_model_start(serialized, messages, ...) Run when LLM starts running. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_start(serialized, query[, ...]) Run when retriever starts running. on_tool_start(serialized, input_str[, ...]) Run when tool starts running. remove_handler(handler) Remove a handler from the callback manager. remove_metadata(keys) remove_tags(tags) set_handler(handler[, inherit]) Set handler as the only handler on the callback manager. set_handlers(handlers[, inherit]) Set handlers as the only handlers on the callback manager. Attributes is_async Return whether the handler is async. add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶ Add a handler to the callback manager. add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶ add_tags(tags: List[str], inherit: bool = True) → None¶ classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → AsyncCallbackManager[source]¶ Configure the async callback manager. Parameters inheritable_callbacks (Optional[Callbacks], optional) – The inheritable callbacks. Defaults to None. local_callbacks (Optional[Callbacks], optional) – The local callbacks. Defaults to None. verbose (bool, optional) – Whether to enable verbose mode. Defaults to False. inheritable_tags (Optional[List[str]], optional) – The inheritable tags. Defaults to None. local_tags (Optional[List[str]], optional) – The local tags. Defaults to None. inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable metadata. Defaults to None. local_metadata (Optional[Dict[str, Any]], optional) – The local metadata. Defaults to None. Returns The configured async callback manager. Return type AsyncCallbackManager async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForChainRun[source]¶ Run when chain starts running. Parameters serialized (Dict[str, Any]) – The serialized chain. inputs (Dict[str, Any]) – The inputs to the chain. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns The async callback manager for the chain run.
Return type AsyncCallbackManagerForChainRun async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → Any[source]¶ Run when LLM starts running. Parameters serialized (Dict[str, Any]) – The serialized LLM. messages (List[List[BaseMessage]]) – The list of messages. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns The list of async callback managers, one for each LLM Run corresponding to each inner message list. Return type List[AsyncCallbackManagerForLLMRun] async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → List[AsyncCallbackManagerForLLMRun][source]¶ Run when LLM starts running. Parameters serialized (Dict[str, Any]) – The serialized LLM. prompts (List[str]) – The list of prompts. run_id (UUID, optional) – The ID of the run. Defaults to None. Returns The list of async callback managers, one for each LLM Run corresponding to each prompt. Return type List[AsyncCallbackManagerForLLMRun] async on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForRetrieverRun[source]¶ Run when retriever starts running. async on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForToolRun[source]¶ Run when tool starts running. Parameters serialized (Dict[str, Any]) – The serialized tool. input_str (str) – The input to the tool. run_id (UUID, optional) – The ID of the run. Defaults to None. parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None. Returns The async callback manager for the tool run. Return type AsyncCallbackManagerForToolRun remove_handler(handler: BaseCallbackHandler) → None¶ Remove a handler from the callback manager. remove_metadata(keys: List[str]) → None¶ remove_tags(tags: List[str]) → None¶ set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶ Set handler as the only handler on the callback manager. set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶ Set handlers as the only handlers on the callback manager. property is_async: bool¶ Return whether the handler is async.
Async callback manager that can be used to handle callbacks from LangChain.
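A brief sketch of the async counterpart under the same assumptions (placeholder serialized chain and inputs); note that is_async is what distinguishes it from the synchronous CallbackManager.

import asyncio

from langchain.callbacks.manager import AsyncCallbackManager


async def main() -> None:
    manager = AsyncCallbackManager(handlers=[])
    assert manager.is_async  # the synchronous CallbackManager reports False

    # Start and end a chain run by hand, just to show the flow.
    run_manager = await manager.on_chain_start({"name": "my_chain"}, {"q": "hi"})
    await run_manager.on_chain_end({"answer": "hello"})


asyncio.run(main())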
c2f6ea80-a248-4458-b543-8407a97f61bb
[ "__future__.annotations", "asyncio", "functools", "logging", "os", "contextlib.asynccontextmanager", "contextlib.contextmanager", "contextvars.ContextVar", "typing.Any", "typing.AsyncGenerator", "typing.Dict", "typing.Generator", "typing.List", "typing.Optional", "typing.Sequence", "typing.Type", "typing.TypeVar", "typing.Union", "typing.cast", "uuid.UUID", "uuid.uuid4", "langchain", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.base.ChainManagerMixin", "langchain.callbacks.base.LLMManagerMixin", "langchain.callbacks.base.RetrieverManagerMixin", "langchain.callbacks.base.RunManagerMixin", "langchain.callbacks.base.ToolManagerMixin", "langchain.callbacks.openai_info.OpenAICallbackHandler", "langchain.callbacks.stdout.StdOutCallbackHandler", "langchain.callbacks.tracers.langchain.LangChainTracer", "langchain.callbacks.tracers.langchain_v1.LangChainTracerV1", "langchain.callbacks.tracers.langchain_v1.TracerSessionV1", "langchain.callbacks.tracers.stdout.ConsoleCallbackHandler", "langchain.callbacks.tracers.wandb.WandbTracer", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.Document", "langchain.schema.LLMResult", "langchain.schema.messages.BaseMessage", "langchain.schema.messages.get_buffer_string" ]
langchain.callbacks.manager.env_var_is_set
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.env_var_is_set.html#langchain.callbacks.manager.env_var_is_set
def env_var_is_set(env_var: str) -> bool:
    """Check if an environment variable is set.

    Args:
        env_var (str): The name of the environment variable.

    Returns:
        bool: True if the environment variable is set, False otherwise.
    """
    return env_var in os.environ and os.environ[env_var] not in (
        "",
        "0",
        "false",
        "False",
    )
langchain.callbacks.manager.env_var_is_set¶ langchain.callbacks.manager.env_var_is_set(env_var: str) → bool[source]¶ Check if an environment variable is set. Parameters env_var (str) – The name of the environment variable. Returns True if the environment variable is set, False otherwise. Return type bool
Check if an environment variable is set.
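The sentinel tuple above means "false-like" strings count as unset; a quick check (the variable name is chosen arbitrarily):

import os

from langchain.callbacks.manager import env_var_is_set

os.environ["MY_FLAG"] = "true"
assert env_var_is_set("MY_FLAG")

os.environ["MY_FLAG"] = "false"  # one of the excluded sentinel values
assert not env_var_is_set("MY_FLAG")

del os.environ["MY_FLAG"]
assert not env_var_is_set("MY_FLAG")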
0934a1f2-7040-4654-b090-9fd58adafe28
[ "random", "string", "tempfile", "traceback", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "langchain.utils.get_from_dict_or_env" ]
langchain.callbacks.mlflow_callback.import_mlflow
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.import_mlflow.html#langchain.callbacks.mlflow_callback.import_mlflow
def import_mlflow() -> Any:
    """Import the mlflow python package and raise an error if it is not installed."""
    try:
        import mlflow
    except ImportError:
        raise ImportError(
            "To use the mlflow callback manager you need to have the `mlflow` python "
            "package installed. Please install it with `pip install mlflow>=2.3.0`"
        )
    return mlflow
langchain.callbacks.mlflow_callback.import_mlflow¶ langchain.callbacks.mlflow_callback.import_mlflow() → Any[source]¶ Import the mlflow python package and raise an error if it is not installed.
Import the mlflow python package and raise an error if it is not installed.
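A small sketch of the guarded-import pattern in use; it assumes only that langchain itself is importable:

from langchain.callbacks.mlflow_callback import import_mlflow

try:
    mlflow = import_mlflow()
    print("mlflow is available:", mlflow.__name__)
except ImportError as err:
    # Raised with install instructions when mlflow is missing.
    print(err)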
1cd12f61-ebb9-42cb-9801-baebbf404a20
[ "random", "string", "tempfile", "traceback", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "langchain.utils.get_from_dict_or_env", "mlflow" ]
langchain.callbacks.mlflow_callback.analyze_text
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.analyze_text.html#langchain.callbacks.mlflow_callback.analyze_text
def analyze_text(
    text: str,
    nlp: Any = None,
) -> dict:
    """Analyze text using textstat and spacy.

    Parameters:
        text (str): The text to analyze.
        nlp (spacy.lang): The spacy language model to use for visualization.

    Returns:
        (dict): A dictionary containing the complexity metrics and visualization
            files serialized to HTML string.
    """
    resp: Dict[str, Any] = {}
    textstat = import_textstat()
    spacy = import_spacy()
    text_complexity_metrics = {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "smog_index": textstat.smog_index(text),
        "coleman_liau_index": textstat.coleman_liau_index(text),
        "automated_readability_index": textstat.automated_readability_index(text),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
        "difficult_words": textstat.difficult_words(text),
        "linsear_write_formula": textstat.linsear_write_formula(text),
        "gunning_fog": textstat.gunning_fog(text),
        # "text_standard": textstat.text_standard(text),
        "fernandez_huerta": textstat.fernandez_huerta(text),
        "szigriszt_pazos": textstat.szigriszt_pazos(text),
        "gutierrez_polini": textstat.gutierrez_polini(text),
        "crawford": textstat.crawford(text),
        "gulpease_index": textstat.gulpease_index(text),
        "osman": textstat.osman(text),
    }
    resp.update({"text_complexity_metrics": text_complexity_metrics})
    resp.update(text_complexity_metrics)

    if nlp is not None:
        doc = nlp(text)

        dep_out = spacy.displacy.render(  # type: ignore
            doc, style="dep", jupyter=False, page=True
        )

        ent_out = spacy.displacy.render(  # type: ignore
            doc, style="ent", jupyter=False, page=True
        )

        text_visualizations = {
            "dependency_tree": dep_out,
            "entities": ent_out,
        }

        resp.update(text_visualizations)

    return resp
langchain.callbacks.mlflow_callback.analyze_text¶ langchain.callbacks.mlflow_callback.analyze_text(text: str, nlp: Any = None) → dict[source]¶ Analyze text using textstat and spacy. Parameters text (str) – The text to analyze. nlp (spacy.lang) – The spacy language model to use for visualization. Returns A dictionary containing the complexity metrics and visualization files serialized to HTML string. Return type (dict)
Analyze text using textstat and spacy.
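A minimal sketch of calling it directly; this assumes the optional textstat and spacy packages are installed (spacy is imported even when nlp is None), and the sample sentence is arbitrary:

from langchain.callbacks.mlflow_callback import analyze_text

# With nlp=None only the textstat complexity metrics are computed.
metrics = analyze_text("The quick brown fox jumps over the lazy dog.")
print(metrics["text_complexity_metrics"]["flesch_reading_ease"])
print(metrics["gunning_fog"])  # the same metrics are also flattened into the dict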
735a14d8-b08a-433d-a63b-3ef5183febe6
[ "random", "string", "tempfile", "traceback", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "langchain.utils.get_from_dict_or_env", "mlflow" ]
langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation.html#langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation
def construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:
    """Construct an html element from a prompt and a generation.

    Parameters:
        prompt (str): The prompt.
        generation (str): The generation.

    Returns:
        (str): The html string."""
    formatted_prompt = prompt.replace("\n", "<br>")
    formatted_generation = generation.replace("\n", "<br>")

    return f"""
    <p style="color:black;">{formatted_prompt}:</p>
    <blockquote>
      <p style="color:green;">
        {formatted_generation}
      </p>
    </blockquote>
    """
langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation¶ langchain.callbacks.mlflow_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) → Any[source]¶ Construct an html element from a prompt and a generation. Parameters prompt (str) – The prompt. generation (str) – The generation. Returns The html string. Return type (str)
Construct an html element from a prompt and a generation.
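A quick illustration with made-up strings; the helper is pure string formatting, so it runs with no extra dependencies:

from langchain.callbacks.mlflow_callback import (
    construct_html_from_prompt_and_generation,
)

html = construct_html_from_prompt_and_generation(
    "Tell me a joke", "Why did the chicken\ncross the road?"
)
# Newlines are rendered as <br>; the generation appears in a green blockquote.
print(html)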
497d9735-7dc0-4087-83ca-f79cf8d6c20a
[ "random", "string", "tempfile", "traceback", "copy.deepcopy", "pathlib.Path", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.utils.BaseMetadataCallbackHandler", "langchain.callbacks.utils.flatten_dict", "langchain.callbacks.utils.hash_string", "langchain.callbacks.utils.import_pandas", "langchain.callbacks.utils.import_spacy", "langchain.callbacks.utils.import_textstat", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "langchain.utils.get_from_dict_or_env", "mlflow" ]
langchain.callbacks.mlflow_callback.MlflowCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.mlflow_callback.MlflowCallbackHandler.html#langchain.callbacks.mlflow_callback.MlflowCallbackHandler
class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
    """Callback Handler that logs metrics and artifacts to mlflow server.

    Parameters:
        name (str): Name of the run.
        experiment (str): Name of the experiment.
        tags (dict): Tags to be attached to the run.
        tracking_uri (str): MLflow tracking server uri.

    This handler utilizes the associated callback method, formats the input of
    each callback function with metadata regarding the state of the LLM run,
    and adds the response to the list of records for both the {method}_records
    and action. It then logs the response to the MLflow server.
    """

    def __init__(
        self,
        name: Optional[str] = "langchainrun-%",
        experiment: Optional[str] = "langchain",
        tags: Optional[Dict] = {},
        tracking_uri: Optional[str] = None,
    ) -> None:
        """Initialize callback handler."""
        import_pandas()
        import_textstat()
        import_mlflow()
        spacy = import_spacy()
        super().__init__()

        self.name = name
        self.experiment = experiment
        self.tags = tags
        self.tracking_uri = tracking_uri

        self.temp_dir = tempfile.TemporaryDirectory()

        self.mlflg = MlflowLogger(
            tracking_uri=self.tracking_uri,
            experiment_name=self.experiment,
            run_name=self.name,
            run_tags=self.tags,
        )

        self.action_records: list = []
        self.nlp = spacy.load("en_core_web_sm")

        self.metrics = {
            "step": 0,
            "starts": 0,
            "ends": 0,
            "errors": 0,
            "text_ctr": 0,
            "chain_starts": 0,
            "chain_ends": 0,
            "llm_starts": 0,
            "llm_ends": 0,
            "llm_streams": 0,
            "tool_starts": 0,
            "tool_ends": 0,
            "agent_ends": 0,
        }

        self.records: Dict[str, Any] = {
            "on_llm_start_records": [],
            "on_llm_token_records": [],
            "on_llm_end_records": [],
            "on_chain_start_records": [],
            "on_chain_end_records": [],
            "on_tool_start_records": [],
            "on_tool_end_records": [],
            "on_text_records": [],
            "on_agent_finish_records": [],
            "on_agent_action_records": [],
            "action_records": [],
        }

    def _reset(self) -> None:
        for k, v in self.metrics.items():
            self.metrics[k] = 0
        for k, v in self.records.items():
            self.records[k] = []

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when LLM starts."""
        self.metrics["step"] += 1
        self.metrics["llm_starts"] += 1
        self.metrics["starts"] += 1

        llm_starts = self.metrics["llm_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        for idx, prompt in enumerate(prompts):
            prompt_resp = deepcopy(resp)
            prompt_resp["prompt"] = prompt
            self.records["on_llm_start_records"].append(prompt_resp)
            self.records["action_records"].append(prompt_resp)
            self.mlflg.jsonf(prompt_resp, f"llm_start_{llm_starts}_prompt_{idx}")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run when LLM generates a new token."""
        self.metrics["step"] += 1
        self.metrics["llm_streams"] += 1

        llm_streams = self.metrics["llm_streams"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_new_token", "token": token})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_llm_token_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"llm_new_tokens_{llm_streams}")

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        self.metrics["step"] += 1
        self.metrics["llm_ends"] += 1
        self.metrics["ends"] += 1

        llm_ends = self.metrics["llm_ends"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_end"})
        resp.update(flatten_dict(response.llm_output or {}))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        for generations in response.generations:
            for idx, generation in enumerate(generations):
                generation_resp = deepcopy(resp)
                generation_resp.update(flatten_dict(generation.dict()))
                generation_resp.update(
                    analyze_text(
                        generation.text,
                        nlp=self.nlp,
                    )
                )
                complexity_metrics: Dict[str, float] = generation_resp.pop("text_complexity_metrics")  # type: ignore  # noqa: E501
                self.mlflg.metrics(
                    complexity_metrics,
                    step=self.metrics["step"],
                )
                self.records["on_llm_end_records"].append(generation_resp)
                self.records["action_records"].append(generation_resp)
                self.mlflg.jsonf(resp, f"llm_end_{llm_ends}_generation_{idx}")
                dependency_tree = generation_resp["dependency_tree"]
                entities = generation_resp["entities"]
                self.mlflg.html(dependency_tree, "dep-" + hash_string(generation.text))
                self.mlflg.html(entities, "ent-" + hash_string(generation.text))

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when LLM errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        self.metrics["step"] += 1
        self.metrics["chain_starts"] += 1
        self.metrics["starts"] += 1

        chain_starts = self.metrics["chain_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_chain_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        chain_input = ",".join([f"{k}={v}" for k, v in inputs.items()])
        input_resp = deepcopy(resp)
        input_resp["inputs"] = chain_input
        self.records["on_chain_start_records"].append(input_resp)
        self.records["action_records"].append(input_resp)
        self.mlflg.jsonf(input_resp, f"chain_start_{chain_starts}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running."""
        self.metrics["step"] += 1
        self.metrics["chain_ends"] += 1
        self.metrics["ends"] += 1

        chain_ends = self.metrics["chain_ends"]

        resp: Dict[str, Any] = {}
        chain_output = ",".join([f"{k}={v}" for k, v in outputs.items()])
        resp.update({"action": "on_chain_end", "outputs": chain_output})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_chain_end_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"chain_end_{chain_ends}")

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when chain errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> None:
        """Run when tool starts running."""
        self.metrics["step"] += 1
        self.metrics["tool_starts"] += 1
        self.metrics["starts"] += 1

        tool_starts = self.metrics["tool_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_tool_start", "input_str": input_str})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_tool_start_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"tool_start_{tool_starts}")

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        """Run when tool ends running."""
        self.metrics["step"] += 1
        self.metrics["tool_ends"] += 1
        self.metrics["ends"] += 1

        tool_ends = self.metrics["tool_ends"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_tool_end", "output": output})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_tool_end_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"tool_end_{tool_ends}")

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when tool errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_text(self, text: str, **kwargs: Any) -> None:
        """Run when agent is ending."""
        self.metrics["step"] += 1
        self.metrics["text_ctr"] += 1

        text_ctr = self.metrics["text_ctr"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_text", "text": text})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_text_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"on_text_{text_ctr}")

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
        """Run when agent ends running."""
        self.metrics["step"] += 1
        self.metrics["agent_ends"] += 1
        self.metrics["ends"] += 1

        agent_ends = self.metrics["agent_ends"]

        resp: Dict[str, Any] = {}
        resp.update(
            {
                "action": "on_agent_finish",
                "output": finish.return_values["output"],
                "log": finish.log,
            }
        )
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_agent_finish_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"agent_finish_{agent_ends}")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
        self.metrics["step"] += 1
        self.metrics["tool_starts"] += 1
        self.metrics["starts"] += 1

        tool_starts = self.metrics["tool_starts"]

        resp: Dict[str, Any] = {}
        resp.update(
            {
                "action": "on_agent_action",
                "tool": action.tool,
                "tool_input": action.tool_input,
                "log": action.log,
            }
        )
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_agent_action_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"agent_action_{tool_starts}")

    def _create_session_analysis_df(self) -> Any:
        """Create a dataframe with all the information from the session."""
        pd = import_pandas()
        on_llm_start_records_df = pd.DataFrame(self.records["on_llm_start_records"])
        on_llm_end_records_df = pd.DataFrame(self.records["on_llm_end_records"])

        llm_input_columns = ["step", "prompt"]
        if "name" in on_llm_start_records_df.columns:
            llm_input_columns.append("name")
        elif "id" in on_llm_start_records_df.columns:
            # id is llm class's full import path.
            # For example:
            # ["langchain", "llms", "openai", "AzureOpenAI"]
            on_llm_start_records_df["name"] = on_llm_start_records_df["id"].apply(
                lambda id_: id_[-1]
            )
            llm_input_columns.append("name")
        llm_input_prompts_df = (
            on_llm_start_records_df[llm_input_columns]
            .dropna(axis=1)
            .rename({"step": "prompt_step"}, axis=1)
        )
        complexity_metrics_columns = []
        visualizations_columns = []

        complexity_metrics_columns = [
            "flesch_reading_ease",
            "flesch_kincaid_grade",
            "smog_index",
            "coleman_liau_index",
            "automated_readability_index",
            "dale_chall_readability_score",
            "difficult_words",
            "linsear_write_formula",
            "gunning_fog",
            # "text_standard",
            "fernandez_huerta",
            "szigriszt_pazos",
            "gutierrez_polini",
            "crawford",
            "gulpease_index",
            "osman",
        ]

        visualizations_columns = ["dependency_tree", "entities"]

        llm_outputs_df = (
            on_llm_end_records_df[
                [
                    "step",
                    "text",
                    "token_usage_total_tokens",
                    "token_usage_prompt_tokens",
                    "token_usage_completion_tokens",
                ]
                + complexity_metrics_columns
                + visualizations_columns
            ]
            .dropna(axis=1)
            .rename({"step": "output_step", "text": "output"}, axis=1)
        )
        session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)
        session_analysis_df["chat_html"] = session_analysis_df[
            ["prompt", "output"]
        ].apply(
            lambda row: construct_html_from_prompt_and_generation(
                row["prompt"], row["output"]
            ),
            axis=1,
        )
        return session_analysis_df

    def flush_tracker(self, langchain_asset: Any = None, finish: bool = False) -> None:
        pd = import_pandas()
        self.mlflg.table("action_records", pd.DataFrame(self.records["action_records"]))
        session_analysis_df = self._create_session_analysis_df()
        chat_html = session_analysis_df.pop("chat_html")
        chat_html = chat_html.replace("\n", "", regex=True)
        self.mlflg.table("session_analysis", pd.DataFrame(session_analysis_df))
        self.mlflg.html("".join(chat_html.tolist()), "chat_html")

        if langchain_asset:
            # To avoid circular import error
            # mlflow only supports LLMChain asset
            if "langchain.chains.llm.LLMChain" in str(type(langchain_asset)):
                self.mlflg.langchain_artifact(langchain_asset)
            else:
                langchain_asset_path = str(Path(self.temp_dir.name, "model.json"))
                try:
                    langchain_asset.save(langchain_asset_path)
                    self.mlflg.artifact(langchain_asset_path)
                except ValueError:
                    try:
                        langchain_asset.save_agent(langchain_asset_path)
                        self.mlflg.artifact(langchain_asset_path)
                    except AttributeError:
                        print("Could not save model.")
                        traceback.print_exc()
                        pass
                    except NotImplementedError:
                        print("Could not save model.")
                        traceback.print_exc()
                        pass
                except NotImplementedError:
                    print("Could not save model.")
                    traceback.print_exc()
                    pass
        if finish:
            self.mlflg.finish_run()
            self._reset()
langchain.callbacks.mlflow_callback.MlflowCallbackHandler¶ class langchain.callbacks.mlflow_callback.MlflowCallbackHandler(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = {}, tracking_uri: Optional[str] = None)[source]¶ Bases: BaseMetadataCallbackHandler, BaseCallbackHandler Callback Handler that logs metrics and artifacts to mlflow server. Parameters name (str) – Name of the run. experiment (str) – Name of the experiment. tags (dict) – Tags to be attached to the run. tracking_uri (str) – MLflow tracking server uri. This handler utilizes the associated callback method, formats the input of each callback function with metadata regarding the state of the LLM run, and adds the response to the list of records for both the {method}_records and action. It then logs the response to the MLflow server. Initialize callback handler. Methods __init__([name, experiment, tags, tracking_uri]) Initialize callback handler. flush_tracker([langchain_asset, finish]) get_custom_callback_meta() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run when agent ends running. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run when LLM generates a new token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, **kwargs) Run when agent is ending. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. reset_callback_meta() Reset the callback metadata. Attributes always_verbose Whether to call verbose callbacks even if verbose is False. ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline flush_tracker(langchain_asset: Any = None, finish: bool = False) → None[source]¶ get_custom_callback_meta() → Dict[str, Any]¶ on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶ Run when agent ends running. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run when LLM generates a new token. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_text(text: str, **kwargs: Any) → None[source]¶ Run when agent is ending. on_tool_end(output: str, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. reset_callback_meta() → None¶ Reset the callback metadata. property always_verbose: bool¶ Whether to call verbose callbacks even if verbose is False. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
Callback Handler that logs metrics and artifacts to mlflow server.
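A hedged end-to-end sketch of the typical pattern (mirroring how LangChain's metadata callback handlers are generally wired up): it assumes an OpenAI API key, a reachable MLflow tracking server at the placeholder URI shown, and the mlflow, textstat, and spacy packages plus the en_core_web_sm model installed.

from langchain.callbacks.mlflow_callback import MlflowCallbackHandler
from langchain.llms import OpenAI

mlflow_callback = MlflowCallbackHandler(
    name="joke-run",
    experiment="langchain-demo",
    tracking_uri="http://localhost:5000",  # placeholder tracking server
)
llm = OpenAI(temperature=0, callbacks=[mlflow_callback])
llm("Tell me a joke about callback handlers.")

# Log the accumulated records/tables and close the MLflow run.
mlflow_callback.flush_tracker(llm, finish=True)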
33fe7421-d71f-4ed2-a8be-ae884f06a6f1
[ "__future__.annotations", "enum.Enum", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.NamedTuple", "typing.Optional", "streamlit.delta_generator.DeltaGenerator", "streamlit.type_util.SupportsStr" ]
langchain.callbacks.streamlit.mutable_expander.ChildType
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.mutable_expander.ChildType.html#langchain.callbacks.streamlit.mutable_expander.ChildType
class ChildType(Enum):
    """The enumerator of the child type."""

    MARKDOWN = "MARKDOWN"
    EXCEPTION = "EXCEPTION"
langchain.callbacks.streamlit.mutable_expander.ChildType¶ class langchain.callbacks.streamlit.mutable_expander.ChildType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶ Bases: Enum The enumerator of the child type. Attributes MARKDOWN EXCEPTION EXCEPTION = 'EXCEPTION'¶ MARKDOWN = 'MARKDOWN'¶
The enumerator of the child type.
354c5bd9-1576-4f83-8f98-d687ffa37131
[ "__future__.annotations", "enum.Enum", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.NamedTuple", "typing.Optional", "streamlit.delta_generator.DeltaGenerator", "streamlit.type_util.SupportsStr" ]
langchain.callbacks.streamlit.mutable_expander.ChildRecord
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.mutable_expander.ChildRecord.html#langchain.callbacks.streamlit.mutable_expander.ChildRecord
class ChildRecord(NamedTuple):
    """The child record as a NamedTuple."""

    type: ChildType
    kwargs: Dict[str, Any]
    dg: DeltaGenerator
langchain.callbacks.streamlit.mutable_expander.ChildRecord¶ class langchain.callbacks.streamlit.mutable_expander.ChildRecord(type: ChildType, kwargs: Dict[str, Any], dg: DeltaGenerator)[source]¶ Bases: NamedTuple The child record as a NamedTuple. Create new instance of ChildRecord(type, kwargs, dg) Methods __init__() count(value, /) Return number of occurrences of value. index(value[, start, stop]) Return first index of value. Attributes dg Alias for field number 2 kwargs Alias for field number 1 type Alias for field number 0 count(value, /)¶ Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /)¶ Return first index of value. Raises ValueError if the value is not present. dg: DeltaGenerator¶ Alias for field number 2 kwargs: Dict[str, Any]¶ Alias for field number 1 type: ChildType¶ Alias for field number 0
The child record as a NamedTuple.
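For orientation, here is a minimal, illustrative-only sketch of how ChildType and ChildRecord pair up: the kwargs dict carries the arguments for the Streamlit call named by the type. The st.empty() placeholder and the "body" kwarg are assumptions, not the library's internal usage.

# Sketch only: render a MARKDOWN child from its record.
import streamlit as st

from langchain.callbacks.streamlit.mutable_expander import ChildRecord, ChildType

dg = st.empty()  # any DeltaGenerator can serve as the record's render target
record = ChildRecord(type=ChildType.MARKDOWN, kwargs={"body": "**hello**"}, dg=dg)
if record.type is ChildType.MARKDOWN:
    record.dg.markdown(**record.kwargs)  # re-renders the child from its record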
9bda9b45-6dbc-40c8-a7b2-2a97323db235
[ "__future__.annotations", "enum.Enum", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.NamedTuple", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.streamlit.mutable_expander.MutableExpander", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "streamlit.delta_generator.DeltaGenerator" ]
langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState.html#langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState
class LLMThoughtState(Enum): """Enumerator of the LLMThought state.""" # The LLM is thinking about what to do next. We don't know which tool we'll run. THINKING = "THINKING" # The LLM has decided to run a tool. We don't have results from the tool yet. RUNNING_TOOL = "RUNNING_TOOL" # We have results from the tool. COMPLETE = "COMPLETE"
langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState¶ class langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶ Bases: Enum Enumerator of the LLMThought state. Attributes THINKING RUNNING_TOOL COMPLETE COMPLETE = 'COMPLETE'¶ RUNNING_TOOL = 'RUNNING_TOOL'¶ THINKING = 'THINKING'¶
Enumerator of the LLMThought state.
576aafcb-f961-46ad-a0db-606df498b3d8
[ "__future__.annotations", "enum.Enum", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.NamedTuple", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.streamlit.mutable_expander.MutableExpander", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "streamlit.delta_generator.DeltaGenerator" ]
langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord.html#langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord
class ToolRecord(NamedTuple): """The tool record as a NamedTuple.""" name: str input_str: str
langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord¶ class langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord(name: str, input_str: str)[source]¶ Bases: NamedTuple The tool record as a NamedTuple. Create new instance of ToolRecord(name, input_str) Methods __init__() count(value, /) Return number of occurrences of value. index(value[, start, stop]) Return first index of value. Attributes input_str Alias for field number 1 name Alias for field number 0 count(value, /)¶ Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /)¶ Return first index of value. Raises ValueError if the value is not present. input_str: str¶ Alias for field number 1 name: str¶ Alias for field number 0
The tool record as a NamedTuple.
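A hedged sketch of the thought lifecycle these two types describe: a thought begins THINKING, becomes RUNNING_TOOL once the agent picks a tool (captured as a ToolRecord), and ends COMPLETE. The transitions below are illustrative, not the library's internal logic.

from langchain.callbacks.streamlit.streamlit_callback_handler import (
    LLMThoughtState,
    ToolRecord,
)

state = LLMThoughtState.THINKING
tool = ToolRecord(name="search", input_str="weather in SF")  # hypothetical tool call
state = LLMThoughtState.RUNNING_TOOL
print(state.value, tool.name, tool.input_str)
state = LLMThoughtState.COMPLETE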
8d5e483f-39c1-46cf-81ae-d80ff80f3fa2
[ "__future__.annotations", "enum.Enum", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.NamedTuple", "typing.Optional", "typing.Union", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.streamlit.mutable_expander.MutableExpander", "langchain.schema.AgentAction", "langchain.schema.AgentFinish", "langchain.schema.LLMResult", "streamlit.delta_generator.DeltaGenerator" ]
langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html#langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler
class StreamlitCallbackHandler(BaseCallbackHandler): """A callback handler that writes to a Streamlit app.""" def __init__( self, parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None, ): """Create a StreamlitCallbackHandler instance. Parameters ---------- parent_container The `st.container` that will contain all the Streamlit elements that the Handler creates. max_thought_containers The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a "History" expander. Defaults to 4. expand_new_thoughts Each LLM "thought" gets its own `st.expander`. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. """ self._parent_container = parent_container self._history_parent = parent_container.container() self._history_container: Optional[MutableExpander] = None self._current_thought: Optional[LLMThought] = None self._completed_thoughts: List[LLMThought] = [] self._max_thought_containers = max(max_thought_containers, 1) self._expand_new_thoughts = expand_new_thoughts self._collapse_completed_thoughts = collapse_completed_thoughts self._thought_labeler = thought_labeler or LLMThoughtLabeler() def _require_current_thought(self) -> LLMThought: """Return our current LLMThought. Raise an error if we have no current thought. """ if self._current_thought is None: raise RuntimeError("Current LLMThought is unexpectedly None!") return self._current_thought def _get_last_completed_thought(self) -> Optional[LLMThought]: """Return our most recent completed LLMThought, or None if we don't have one.""" if len(self._completed_thoughts) > 0: return self._completed_thoughts[len(self._completed_thoughts) - 1] return None @property def _num_thought_containers(self) -> int: """The number of 'thought containers' we're currently showing: the number of completed thought containers, the history container (if it exists), and the current thought container (if it exists). """ count = len(self._completed_thoughts) if self._history_container is not None: count += 1 if self._current_thought is not None: count += 1 return count def _complete_current_thought(self, final_label: Optional[str] = None) -> None: """Complete the current thought, optionally assigning it a new label. Add it to our _completed_thoughts list. """ thought = self._require_current_thought() thought.complete(final_label) self._completed_thoughts.append(thought) self._current_thought = None def _prune_old_thought_containers(self) -> None: """If we have too many thoughts onscreen, move older thoughts to the 'history container.' """ while ( self._num_thought_containers > self._max_thought_containers and len(self._completed_thoughts) > 0 ): # Create our history container if it doesn't exist, and if # max_thought_containers is > 1. (if max_thought_containers is 1, we don't # have room to show history.) 
if self._history_container is None and self._max_thought_containers > 1: self._history_container = MutableExpander( self._history_parent, label=self._thought_labeler.get_history_label(), expanded=False, ) oldest_thought = self._completed_thoughts.pop(0) if self._history_container is not None: self._history_container.markdown(oldest_thought.container.label) self._history_container.append_copy(oldest_thought.container) oldest_thought.clear() def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any ) -> None: if self._current_thought is None: self._current_thought = LLMThought( parent_container=self._parent_container, expanded=self._expand_new_thoughts, collapse_on_complete=self._collapse_completed_thoughts, labeler=self._thought_labeler, ) self._current_thought.on_llm_start(serialized, prompts) # We don't prune_old_thought_containers here, because our container won't # be visible until it has a child. def on_llm_new_token(self, token: str, **kwargs: Any) -> None: self._require_current_thought().on_llm_new_token(token, **kwargs) self._prune_old_thought_containers() def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: self._require_current_thought().on_llm_end(response, **kwargs) self._prune_old_thought_containers() def on_llm_error( self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any ) -> None: self._require_current_thought().on_llm_error(error, **kwargs) self._prune_old_thought_containers() def on_tool_start( self, serialized: Dict[str, Any], input_str: str, **kwargs: Any ) -> None: self._require_current_thought().on_tool_start(serialized, input_str, **kwargs) self._prune_old_thought_containers() def on_tool_end( self, output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any, ) -> None: self._require_current_thought().on_tool_end( output, color, observation_prefix, llm_prefix, **kwargs ) self._complete_current_thought() def on_tool_error( self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any ) -> None: self._require_current_thought().on_tool_error(error, **kwargs) self._prune_old_thought_containers() def on_text( self, text: str, color: Optional[str] = None, end: str = "", **kwargs: Any, ) -> None: pass def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any ) -> None: pass def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None: pass def on_chain_error( self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any ) -> None: pass def on_agent_action( self, action: AgentAction, color: Optional[str] = None, **kwargs: Any ) -> Any: self._require_current_thought().on_agent_action(action, color, **kwargs) self._prune_old_thought_containers() def on_agent_finish( self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any ) -> None: if self._current_thought is not None: self._current_thought.complete( self._thought_labeler.get_final_agent_thought_label() ) self._current_thought = None
langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler¶ class langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None)[source]¶ Bases: BaseCallbackHandler A callback handler that writes to a Streamlit app. Create a StreamlitCallbackHandler instance. Parameters parent_container – The st.container that will contain all the Streamlit elements that the Handler creates. max_thought_containers – The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a “History” expander. Defaults to 4. expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. Methods __init__(parent_container, *[, ...]) Create a StreamlitCallbackHandler instance. on_agent_action(action[, color]) Run on agent action. on_agent_finish(finish[, color]) Run on agent end. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text[, color, end]) Run on arbitrary text. on_tool_end(output[, color, ...]) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. 
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶ Run on arbitrary text. on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
A callback handler that writes to a Streamlit app.
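A minimal Streamlit app sketch using the handler class directly; `agent` is an assumed, pre-built LangChain agent, and the script would be launched with `streamlit run app.py`.

import streamlit as st

from langchain.callbacks.streamlit.streamlit_callback_handler import (
    StreamlitCallbackHandler,
)

handler = StreamlitCallbackHandler(
    st.container(),
    max_thought_containers=4,
    expand_new_thoughts=True,
    collapse_completed_thoughts=True,
)
prompt = st.text_input("Ask the agent something")
if prompt:
    response = agent.run(prompt, callbacks=[handler])  # `agent` is assumed to exist
    st.write(response)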
f99096fb-59fd-4aad-8c91-fb3631ffdd81
[ "__future__.annotations", "typing.TYPE_CHECKING", "typing.Optional", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtLabeler", "langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler", "streamlit.delta_generator.DeltaGenerator" ]
langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler.html#langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler
def StreamlitCallbackHandler( parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None, ) -> BaseCallbackHandler: """Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards use with a LangChain Agent; it displays the Agent's LLM and tool-usage "thoughts" inside a series of Streamlit expanders. Parameters ---------- parent_container The `st.container` that will contain all the Streamlit elements that the Handler creates. max_thought_containers The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a "History" expander. Defaults to 4. expand_new_thoughts Each LLM "thought" gets its own `st.expander`. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. Returns ------- A new StreamlitCallbackHandler instance. Note that this is an "auto-updating" API: if the installed version of Streamlit has a more recent StreamlitCallbackHandler implementation, an instance of that class will be used. """ # If we're using a version of Streamlit that implements StreamlitCallbackHandler, # delegate to it instead of using our built-in handler. The official handler is # guaranteed to support the same set of kwargs. try: from streamlit.external.langchain import ( StreamlitCallbackHandler as OfficialStreamlitCallbackHandler, # type: ignore # noqa: 501 ) return OfficialStreamlitCallbackHandler( parent_container, max_thought_containers=max_thought_containers, expand_new_thoughts=expand_new_thoughts, collapse_completed_thoughts=collapse_completed_thoughts, thought_labeler=thought_labeler, ) except ImportError: return _InternalStreamlitCallbackHandler( parent_container, max_thought_containers=max_thought_containers, expand_new_thoughts=expand_new_thoughts, collapse_completed_thoughts=collapse_completed_thoughts, thought_labeler=thought_labeler, )
langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler¶ langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None) → BaseCallbackHandler[source]¶ Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards use with a LangChain Agent; it displays the Agent’s LLM and tool-usage “thoughts” inside a series of Streamlit expanders. Parameters parent_container – The st.container that will contain all the Streamlit elements that the Handler creates. max_thought_containers – The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a “History” expander. Defaults to 4. expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. Returns A new StreamlitCallbackHandler instance. Note that this is an “auto-updating” API: if the installed version of Streamlit has a more recent StreamlitCallbackHandler implementation, an instance of that class will be used.
Construct a new StreamlitCallbackHandler.
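The factory is the natural entry point, since it transparently delegates to Streamlit's official handler when one is installed. A short usage sketch under the same assumption as above: `agent` already exists.

import streamlit as st

from langchain.callbacks.streamlit import StreamlitCallbackHandler

handler = StreamlitCallbackHandler(st.container())
response = agent.run("What is 2 ** 10?", callbacks=[handler])  # `agent` assumed
st.write(response)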
3623b7e0-a8d2-408e-9bf0-7bd95a807240
[ "logging", "concurrent.futures.Future", "concurrent.futures.ThreadPoolExecutor", "concurrent.futures.wait", "typing.Any", "typing.Optional", "typing.Sequence", "typing.Set", "typing.Union", "uuid.UUID", "langchainplus_sdk.LangChainPlusClient", "langchainplus_sdk.RunEvaluator", "langchain.callbacks.manager.tracing_v2_enabled", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run" ]
langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html#langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler
class EvaluatorCallbackHandler(BaseTracer):
    """A tracer that runs a run evaluator whenever a run is persisted.

    Parameters
    ----------
    evaluators : Sequence[RunEvaluator]
        The run evaluators to apply to all top level runs.
    max_workers : int, optional
        The maximum number of worker threads to use for running the evaluators.
        If not specified, it will default to the number of evaluators.
    client : LangChainPlusClient, optional
        The LangChainPlusClient instance to use for evaluating the runs.
        If not specified, a new instance will be created.
    example_id : Union[UUID, str], optional
        The example ID to be associated with the runs.
    project_name : str, optional
        The LangSmith project name to organize eval chain runs under.

    Attributes
    ----------
    example_id : Union[UUID, None]
        The example ID associated with the runs.
    client : LangChainPlusClient
        The LangChainPlusClient instance used for evaluating the runs.
    evaluators : Sequence[RunEvaluator]
        The sequence of run evaluators to be executed.
    executor : ThreadPoolExecutor
        The thread pool executor used for running the evaluators.
    futures : Set[Future]
        The set of futures representing the running evaluators.
    skip_unfinished : bool
        Whether to skip runs that are not finished or raised an error.
    project_name : Optional[str]
        The LangSmith project name to organize eval chain runs under.
    """

    name = "evaluator_callback_handler"

    def __init__(
        self,
        evaluators: Sequence[RunEvaluator],
        max_workers: Optional[int] = None,
        client: Optional[LangChainPlusClient] = None,
        example_id: Optional[Union[UUID, str]] = None,
        skip_unfinished: bool = True,
        project_name: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(**kwargs)
        self.example_id = (
            UUID(example_id) if isinstance(example_id, str) else example_id
        )
        self.client = client or LangChainPlusClient()
        self.evaluators = evaluators
        self.executor = ThreadPoolExecutor(
            max_workers=max(max_workers or len(evaluators), 1)
        )
        self.futures: Set[Future] = set()
        self.skip_unfinished = skip_unfinished
        self.project_name = project_name

    def _evaluate_in_project(self, run: Run, evaluator: RunEvaluator) -> None:
        """Evaluate the run in the project.

        Parameters
        ----------
        run : Run
            The run to be evaluated.
        evaluator : RunEvaluator
            The evaluator to use for evaluating the run.
        """
        try:
            if self.project_name is None:
                self.client.evaluate_run(run, evaluator)
            else:
                # Evaluate inside a traced context so the eval chain run is
                # organized under the configured project (and only once).
                with tracing_v2_enabled(
                    project_name=self.project_name, tags=["eval"]
                ):
                    self.client.evaluate_run(run, evaluator)
        except Exception as e:
            logger.error(
                f"Error evaluating run {run.id} with "
                f"{evaluator.__class__.__name__}: {e}",
                exc_info=True,
            )
            raise e

    def _persist_run(self, run: Run) -> None:
        """Run the evaluator on the run.

        Parameters
        ----------
        run : Run
            The run to be evaluated.
        """
        if self.skip_unfinished and not run.outputs:
            logger.debug(f"Skipping unfinished run {run.id}")
            return
        run_ = run.copy()
        run_.reference_example_id = self.example_id
        for evaluator in self.evaluators:
            self.futures.add(
                self.executor.submit(self._evaluate_in_project, run_, evaluator)
            )

    def wait_for_futures(self) -> None:
        """Wait for all futures to complete."""
        futures = list(self.futures)
        wait(futures)
        for future in futures:
            self.futures.remove(future)
langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler¶ class langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler(evaluators: Sequence[RunEvaluator], max_workers: Optional[int] = None, client: Optional[Client] = None, example_id: Optional[Union[UUID, str]] = None, skip_unfinished: bool = True, project_name: Optional[str] = None, **kwargs: Any)[source]¶ Bases: BaseTracer A tracer that runs a run evaluator whenever a run is persisted. Parameters evaluators (Sequence[RunEvaluator]) – The run evaluators to apply to all top level runs. max_workers (int, optional) – The maximum number of worker threads to use for running the evaluators. If not specified, it will default to the number of evaluators. client (LangSmith Client, optional) – The LangSmith client instance to use for evaluating the runs. If not specified, a new instance will be created. example_id (Union[UUID, str], optional) – The example ID to be associated with the runs. project_name (str, optional) – The LangSmith project name to organize eval chain runs under. example_id¶ The example ID associated with the runs. Type Union[UUID, None] client¶ The LangSmith client instance used for evaluating the runs. Type Client evaluators¶ The sequence of run evaluators to be executed. Type Sequence[RunEvaluator] executor¶ The thread pool executor used for running the evaluators. Type ThreadPoolExecutor futures¶ The set of futures representing the running evaluators. Type Set[Future] skip_unfinished¶ Whether to skip runs that are not finished or raised an error. Type bool project_name¶ The LangSmith project name to organize eval chain runs under. Type Optional[str] Methods __init__(evaluators[, max_workers, client, ...]) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. wait_for_futures() Wait for all futures to complete. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. wait_for_futures() → None[source]¶ Wait for all futures to complete. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'evaluator_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
A tracer that runs a run evaluator whenever a run is persisted.
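A hedged usage sketch: attach the handler so every persisted top-level run is scored in a background thread, then block until scoring finishes. `my_evaluator` is a hypothetical, pre-built RunEvaluator instance and `chain` an assumed LangChain chain.

from langchain.callbacks.tracers.evaluation import EvaluatorCallbackHandler

handler = EvaluatorCallbackHandler(
    evaluators=[my_evaluator],       # hypothetical RunEvaluator
    project_name="my-eval-project",  # illustrative project name
)
chain.run("input text", callbacks=[handler])  # `chain` is assumed to exist
handler.wait_for_futures()  # drain the evaluator thread pool before exiting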
26051f93-6ff8-43ff-aabd-597c22934bf9
[ "__future__.annotations", "json", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Tuple", "typing.TypedDict", "typing.Union", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "wandb.Settings", "wandb.sdk.data_types.trace_tree.Span", "wandb.sdk.lib.paths.StrPath", "wandb.wandb_run.Run" ]
langchain.callbacks.tracers.wandb.WandbRunArgs
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbRunArgs.html#langchain.callbacks.tracers.wandb.WandbRunArgs
class WandbRunArgs(TypedDict): """Arguments for the WandbTracer.""" job_type: Optional[str] dir: Optional[StrPath] config: Union[Dict, str, None] project: Optional[str] entity: Optional[str] reinit: Optional[bool] tags: Optional[Sequence] group: Optional[str] name: Optional[str] notes: Optional[str] magic: Optional[Union[dict, str, bool]] config_exclude_keys: Optional[List[str]] config_include_keys: Optional[List[str]] anonymous: Optional[str] mode: Optional[str] allow_val_change: Optional[bool] resume: Optional[Union[bool, str]] force: Optional[bool] tensorboard: Optional[bool] sync_tensorboard: Optional[bool] monitor_gym: Optional[bool] save_code: Optional[bool] id: Optional[str] settings: Union[WBSettings, Dict[str, Any], None]
langchain.callbacks.tracers.wandb.WandbRunArgs¶ class langchain.callbacks.tracers.wandb.WandbRunArgs[source]¶ Bases: TypedDict Arguments for the WandbTracer. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() Attributes job_type dir config project entity reinit tags group name notes magic config_exclude_keys config_include_keys anonymous mode allow_val_change resume force tensorboard sync_tensorboard monitor_gym save_code id settings clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶ Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values¶ allow_val_change: Optional[bool]¶ anonymous: Optional[str]¶ config: Union[Dict, str, None]¶ config_exclude_keys: Optional[List[str]]¶ config_include_keys: Optional[List[str]]¶ dir: Optional[StrPath]¶ entity: Optional[str]¶ force: Optional[bool]¶ group: Optional[str]¶ id: Optional[str]¶ job_type: Optional[str]¶ magic: Optional[Union[dict, str, bool]]¶ mode: Optional[str]¶ monitor_gym: Optional[bool]¶ name: Optional[str]¶ notes: Optional[str]¶ project: Optional[str]¶ reinit: Optional[bool]¶ resume: Optional[Union[bool, str]]¶ save_code: Optional[bool]¶ settings: Union[WBSettings, Dict[str, Any], None]¶ sync_tensorboard: Optional[bool]¶ tags: Optional[Sequence]¶ tensorboard: Optional[bool]¶
Arguments for the WandbTracer.
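A construction sketch: because WandbRunArgs is a TypedDict, it is built as a plain dict. Type checkers treat all keys as required (the class is declared with total=True), but nothing is enforced at runtime, so a partial dict like this works in practice. All values below are illustrative.

from langchain.callbacks.tracers.wandb import WandbRunArgs, WandbTracer

run_args: WandbRunArgs = {
    "project": "langchain-traces",  # hypothetical W&B project name
    "tags": ["experiment"],
    "settings": {"silent": True},   # mirrors the tracer's own default
}
tracer = WandbTracer(run_args=run_args)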
5ba94549-8198-456d-b42e-bcc2b1e56d18
[ "__future__.annotations", "json", "typing.TYPE_CHECKING", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Tuple", "typing.TypedDict", "typing.Union", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "wandb.Settings", "wandb.sdk.data_types.trace_tree.Span", "wandb.sdk.lib.paths.StrPath", "wandb.wandb_run.Run" ]
langchain.callbacks.tracers.wandb.WandbTracer
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.wandb.WandbTracer.html#langchain.callbacks.tracers.wandb.WandbTracer
class WandbTracer(BaseTracer):
    """Callback Handler that logs to Weights and Biases.

    This handler will log the model architecture and run traces to Weights
    and Biases. This will ensure that all LangChain activity is logged to W&B.
    """

    _run: Optional[WBRun] = None
    _run_args: Optional[WandbRunArgs] = None

    def __init__(self, run_args: Optional[WandbRunArgs] = None, **kwargs: Any) -> None:
        """Initializes the WandbTracer.

        Parameters:
            run_args: (dict, optional) Arguments to pass to `wandb.init()`. If not
                provided, `wandb.init()` will be called with no arguments. Please
                refer to the `wandb.init` for more details.

        To use W&B to monitor all LangChain activity, add this tracer like any
        other LangChain callback:
        ```
        from wandb.integration.langchain import WandbTracer

        tracer = WandbTracer()
        chain = LLMChain(llm, callbacks=[tracer])
        # ...end of notebook / script:
        tracer.finish()
        ```
        """
        super().__init__(**kwargs)
        try:
            import wandb
            from wandb.sdk.data_types import trace_tree
        except ImportError as e:
            raise ImportError(
                "Could not import wandb python package. "
                "Please install it with `pip install -U wandb`."
            ) from e
        self._wandb = wandb
        self._trace_tree = trace_tree
        self._run_args = run_args
        self._ensure_run(should_print_url=(wandb.run is None))
        self.run_processor = RunProcessor(self._wandb, self._trace_tree)

    def finish(self) -> None:
        """Waits for all asynchronous processes to finish and data to upload.

        Proxy for `wandb.finish()`.
        """
        self._wandb.finish()

    def _log_trace_from_run(self, run: Run) -> None:
        """Logs a LangChain Run to W&B as a W&B Trace."""
        self._ensure_run()

        root_span = self.run_processor.process_span(run)
        model_dict = self.run_processor.process_model(run)

        if root_span is None:
            return

        model_trace = self._trace_tree.WBTraceTree(
            root_span=root_span,
            model_dict=model_dict,
        )
        if self._wandb.run is not None:
            self._wandb.run.log({"langchain_trace": model_trace})

    def _ensure_run(self, should_print_url: bool = False) -> None:
        """Ensures an active W&B run exists.

        If not, will start a new run with the provided run_args.
        """
        if self._wandb.run is None:
            # Make a shallow copy of the run args, so we don't modify the original
            run_args = self._run_args or {}  # type: ignore
            run_args: dict = {**run_args}  # type: ignore

            # Prefer to run in silent mode since W&B has a lot of output
            # which can be undesirable when dealing with text-based models.
            if "settings" not in run_args:  # type: ignore
                run_args["settings"] = {"silent": True}  # type: ignore

            # Start the run and add the stream table
            self._wandb.init(**run_args)
            if self._wandb.run is not None:
                if should_print_url:
                    run_url = self._wandb.run.settings.run_url
                    self._wandb.termlog(
                        f"Streaming LangChain activity to W&B at {run_url}\n"
                        "`WandbTracer` is currently in beta.\n"
                        "Please report any issues to "
                        "https://github.com/wandb/wandb/issues with the tag "
                        "`langchain`."
                    )

                self._wandb.run._label(repo="langchain")

    def _persist_run(self, run: "Run") -> None:
        """Persist a run."""
        self._log_trace_from_run(run)
langchain.callbacks.tracers.wandb.WandbTracer¶ class langchain.callbacks.tracers.wandb.WandbTracer(run_args: Optional[WandbRunArgs] = None, **kwargs: Any)[source]¶ Bases: BaseTracer Callback Handler that logs to Weights and Biases. This handler will log the model architecture and run traces to Weights and Biases. This will ensure that all LangChain activity is logged to W&B. Initializes the WandbTracer. Parameters run_args – (dict, optional) Arguments to pass to wandb.init(). If not provided, wandb.init() will be called with no arguments. Please refer to the wandb.init for more details. To use W&B to monitor all LangChain activity, add this tracer like any other LangChain callback: ``` from wandb.integration.langchain import WandbTracer tracer = WandbTracer() chain = LLMChain(llm, callbacks=[tracer]) # …end of notebook / script: tracer.finish() ``` Methods __init__([run_args]) Initializes the WandbTracer. finish() Waits for all asynchronous processes to finish and data to upload. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline finish() → None[source]¶ Waits for all asynchronous processes to finish and data to upload. Proxy for wandb.finish(). on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. 
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
Callback Handler that logs to Weights and Biases.
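A usage sketch mirroring the docstring example, but importing from langchain's own tracer module; `llm` and `prompt` are assumed to exist already.

from langchain.callbacks.tracers.wandb import WandbTracer
from langchain.chains import LLMChain

tracer = WandbTracer()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[tracer])  # llm/prompt assumed
chain.run("Hello, W&B!")
tracer.finish()  # proxy for wandb.finish(); flushes pending uploads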
bff5a30f-3c8f-4cb9-96a4-9760813524fb
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.TracerSessionV1Base
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1Base.html#langchain.callbacks.tracers.schemas.TracerSessionV1Base
class TracerSessionV1Base(BaseModel): """Base class for TracerSessionV1.""" start_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow) name: Optional[str] = None extra: Optional[Dict[str, Any]] = None
langchain.callbacks.tracers.schemas.TracerSessionV1Base¶ class langchain.callbacks.tracers.schemas.TracerSessionV1Base(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]¶ Bases: BaseModel Base class for TracerSessionV1. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
Base class for TracerSessionV1.
38ea4373-b3eb-448c-8c30-84510eb323ef
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.TracerSessionV1Create
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1Create.html#langchain.callbacks.tracers.schemas.TracerSessionV1Create
class TracerSessionV1Create(TracerSessionV1Base): """Create class for TracerSessionV1."""
langchain.callbacks.tracers.schemas.TracerSessionV1Create¶ class langchain.callbacks.tracers.schemas.TracerSessionV1Create(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]¶ Bases: TracerSessionV1Base Create class for TracerSessionV1. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
Create class for TracerSessionV1.
6bd74e49-d5ea-449c-9333-2a64378150f4
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.TracerSessionV1
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionV1.html#langchain.callbacks.tracers.schemas.TracerSessionV1
class TracerSessionV1(TracerSessionV1Base): """TracerSessionV1 schema.""" id: int
langchain.callbacks.tracers.schemas.TracerSessionV1¶ class langchain.callbacks.tracers.schemas.TracerSessionV1(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, id: int)[source]¶ Bases: TracerSessionV1Base TracerSessionV1 schema. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param id: int [Required]¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
TracerSessionV1 schema.
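A construction sketch for the V1 session schemas above; start_time is filled in automatically by the default_factory, so only the fields of interest need to be passed. Values are illustrative.

from langchain.callbacks.tracers.schemas import (
    TracerSessionV1,
    TracerSessionV1Create,
)

create = TracerSessionV1Create(name="my-session", extra={"env": "dev"})
session = TracerSessionV1(id=1, name=create.name, extra=create.extra)
print(session.start_time)  # defaulted to datetime.datetime.utcnow()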
a91bcc0b-4c10-4a6c-8a3c-b89a63fab54d
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.TracerSessionBase
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSessionBase.html#langchain.callbacks.tracers.schemas.TracerSessionBase
class TracerSessionBase(TracerSessionV1Base): """A creation class for TracerSession.""" tenant_id: UUID
langchain.callbacks.tracers.schemas.TracerSessionBase¶ class langchain.callbacks.tracers.schemas.TracerSessionBase(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID)[source]¶ Bases: TracerSessionV1Base A creation class for TracerSession. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶ param tenant_id: uuid.UUID [Required]¶
A creation class for TracerSession.
ee5e417e-743b-4761-81f3-26b0e05d0859
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.TracerSession
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.TracerSession.html#langchain.callbacks.tracers.schemas.TracerSession
class TracerSession(TracerSessionBase):
    """TracerSession schema for the V2 API."""

    id: UUID
langchain.callbacks.tracers.schemas.TracerSession¶ class langchain.callbacks.tracers.schemas.TracerSession(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID, id: UUID)[source]¶ Bases: TracerSessionBase TracerSession schema for the V2 API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param id: uuid.UUID [Required]¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶ param tenant_id: uuid.UUID [Required]¶
TracerSession schema for the V2 API.
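The V2 session replaces the integer id with a UUID and adds tenant_id; a minimal construction sketch with freshly generated identifiers follows.

from uuid import uuid4

from langchain.callbacks.tracers.schemas import TracerSession

session = TracerSession(id=uuid4(), tenant_id=uuid4(), name="v2-session")
print(session.tenant_id, session.id)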
683357de-6fe7-4c79-a75f-591e63d1e668
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.BaseRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.BaseRun.html#langchain.callbacks.tracers.schemas.BaseRun
class BaseRun(BaseModel): """Base class for Run.""" uuid: str parent_uuid: Optional[str] = None start_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow) end_time: datetime.datetime = Field(default_factory=datetime.datetime.utcnow) extra: Optional[Dict[str, Any]] = None execution_order: int child_execution_order: int serialized: Dict[str, Any] session_id: int error: Optional[str] = None
langchain.callbacks.tracers.schemas.BaseRun¶ class langchain.callbacks.tracers.schemas.BaseRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None)[source]¶ Bases: BaseModel Base class for Run. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param uuid: str [Required]¶
Base class for Run.
05045801-aea9-42b3-8dfb-f28c2a400255
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.LLMRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.LLMRun.html#langchain.callbacks.tracers.schemas.LLMRun
class LLMRun(BaseRun): """Class for LLMRun.""" prompts: List[str] response: Optional[LLMResult] = None
langchain.callbacks.tracers.schemas.LLMRun¶ class langchain.callbacks.tracers.schemas.LLMRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, prompts: List[str], response: Optional[LLMResult] = None)[source]¶ Bases: BaseRun Class for LLMRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param prompts: List[str] [Required]¶ param response: Optional[langchain.schema.output.LLMResult] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param uuid: str [Required]¶
Class for LLMRun.
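A hedged construction sketch for the V1 LLMRun schema; identifiers, ordering values, and the serialized payload are illustrative.

from langchain.callbacks.tracers.schemas import LLMRun

llm_run = LLMRun(
    uuid="llm-run-1",
    execution_order=1,
    child_execution_order=1,
    serialized={"name": "fake-llm"},
    session_id=1,
    prompts=["Tell me a joke."],
)
print(llm_run.response)  # None until an LLMResult is attached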
9aeb9165-33c3-4da0-b9e5-560cc997a2c1
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.ChainRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ChainRun.html#langchain.callbacks.tracers.schemas.ChainRun
class ChainRun(BaseRun): """Class for ChainRun.""" inputs: Dict[str, Any] outputs: Optional[Dict[str, Any]] = None child_llm_runs: List[LLMRun] = Field(default_factory=list) child_chain_runs: List[ChainRun] = Field(default_factory=list) child_tool_runs: List[ToolRun] = Field(default_factory=list)
langchain.callbacks.tracers.schemas.ChainRun¶ class langchain.callbacks.tracers.schemas.ChainRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, inputs: Dict[str, Any], outputs: Optional[Dict[str, Any]] = None, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]¶ Bases: BaseRun Class for ChainRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]¶ param child_execution_order: int [Required]¶ param child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]¶ param child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param inputs: Dict[str, Any] [Required]¶ param outputs: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param uuid: str [Required]¶
Class for ChainRun.
2074a539-0666-463a-b000-2e9b987cf8b3
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.ToolRun
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.ToolRun.html#langchain.callbacks.tracers.schemas.ToolRun
class ToolRun(BaseRun):
    """Class for ToolRun."""

    tool_input: str
    output: Optional[str] = None
    action: str
    child_llm_runs: List[LLMRun] = Field(default_factory=list)
    child_chain_runs: List[ChainRun] = Field(default_factory=list)
    child_tool_runs: List[ToolRun] = Field(default_factory=list)
langchain.callbacks.tracers.schemas.ToolRun¶ class langchain.callbacks.tracers.schemas.ToolRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, tool_input: str, output: Optional[str] = None, action: str, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]¶ Bases: BaseRun Class for ToolRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param action: str [Required]¶ param child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]¶ param child_execution_order: int [Required]¶ param child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]¶ param child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param output: Optional[str] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param tool_input: str [Required]¶ param uuid: str [Required]¶
Class for ToolRun.
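A hedged sketch of nesting a child LLM run under a tool run via the child_llm_runs field; the same pattern applies to ChainRun above. Every value is illustrative, and the unset timestamps are assumed to come from the schema defaults.

from langchain.callbacks.tracers.schemas import LLMRun, ToolRun

child = LLMRun(
    uuid="llm-1",
    execution_order=2,
    child_execution_order=2,
    serialized={},
    session_id=1,
    prompts=["Summarize the search results."],
)
tool_run = ToolRun(
    uuid="tool-1",
    execution_order=1,
    child_execution_order=2,
    serialized={"name": "search"},   # illustrative tool metadata
    session_id=1,
    action="search",
    tool_input="latest langchain release",
    child_llm_runs=[child],
)
print(len(tool_run.child_llm_runs))  # 1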
3f1ceaf6-df5e-46fd-a378-dfbb94592852
[ "__future__.annotations", "datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "uuid.UUID", "langchainplus_sdk.schemas.RunBase", "langchainplus_sdk.schemas.RunTypeEnum", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.schema.LLMResult" ]
langchain.callbacks.tracers.schemas.Run
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.schemas.Run.html#langchain.callbacks.tracers.schemas.Run
class Run(BaseRunV2):
    """Run schema for the V2 API in the Tracer."""

    execution_order: int
    child_execution_order: int
    child_runs: List[Run] = Field(default_factory=list)
    tags: Optional[List[str]] = Field(default_factory=list)

    @root_validator(pre=True)
    def assign_name(cls, values: dict) -> dict:
        """Assign name to the run."""
        if values.get("name") is None:
            if "name" in values["serialized"]:
                values["name"] = values["serialized"]["name"]
            elif "id" in values["serialized"]:
                values["name"] = values["serialized"]["id"][-1]
        return values
langchain.callbacks.tracers.schemas.Run¶ class langchain.callbacks.tracers.schemas.Run(*, id: UUID, name: str, start_time: datetime, run_type: Union[RunTypeEnum, str], end_time: Optional[datetime] = None, extra: Optional[dict] = None, error: Optional[str] = None, serialized: Optional[dict] = None, events: Optional[List[Dict]] = None, inputs: dict, outputs: Optional[dict] = None, reference_example_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, execution_order: int, child_execution_order: int, child_runs: List[Run] = None)[source]¶ Bases: RunBase Run schema for the V2 API in the Tracer. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param child_runs: List[Run] [Optional]¶ param end_time: Optional[datetime] = None¶ param error: Optional[str] = None¶ param events: Optional[List[Dict]] = None¶ param execution_order: int [Required]¶ param extra: Optional[dict] = None¶ param id: UUID [Required]¶ param inputs: dict [Required]¶ param name: str [Required]¶ param outputs: Optional[dict] = None¶ param parent_run_id: Optional[UUID] = None¶ param reference_example_id: Optional[UUID] = None¶ param run_type: Union[RunTypeEnum, str] [Required]¶ param serialized: Optional[dict] = None¶ param start_time: datetime [Required]¶ param tags: Optional[List[str]] [Optional]¶ validator assign_name  »  all fields[source]¶ Assign name to the run.
Run schema for the V2 API in the Tracer.
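A small sketch of the assign_name validator: when name is omitted, it is assumed to be filled from serialized["name"] or the last element of serialized["id"]. The serialized dict and inputs below are illustrative.

from datetime import datetime
from uuid import uuid4

from langchain.callbacks.tracers.schemas import Run

run = Run(
    id=uuid4(),
    start_time=datetime.utcnow(),
    run_type="chain",
    serialized={"id": ["langchain", "chains", "LLMChain"]},
    inputs={"question": "hi"},
    execution_order=1,
    child_execution_order=1,
)
print(run.name)  # "LLMChain" -- taken from serialized["id"][-1] by assign_name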
c392e4e0-8504-45d6-a160-6fb35d67d00c
[ "__future__.annotations", "logging", "os", "typing.Any", "typing.Dict", "typing.Optional", "typing.Union", "requests", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.ChainRun", "langchain.callbacks.tracers.schemas.LLMRun", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.ToolRun", "langchain.callbacks.tracers.schemas.TracerSession", "langchain.callbacks.tracers.schemas.TracerSessionV1", "langchain.callbacks.tracers.schemas.TracerSessionV1Base", "langchain.schema.messages.get_buffer_string", "langchain.utils.raise_for_status_with_text" ]
langchain.callbacks.tracers.langchain_v1.get_headers
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.get_headers.html#langchain.callbacks.tracers.langchain_v1.get_headers
def get_headers() -> Dict[str, Any]:
    """Get the headers for the LangChain API."""
    headers: Dict[str, Any] = {"Content-Type": "application/json"}
    if os.getenv("LANGCHAIN_API_KEY"):
        headers["x-api-key"] = os.getenv("LANGCHAIN_API_KEY")
    return headers
langchain.callbacks.tracers.langchain_v1.get_headers¶ langchain.callbacks.tracers.langchain_v1.get_headers() → Dict[str, Any][source]¶ Get the headers for the LangChain API.
Get the headers for the LangChain API.
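A quick usage sketch: with LANGCHAIN_API_KEY set (a placeholder value here, not a real key), the x-api-key header is included; without it, only the Content-Type header is returned.

import os

from langchain.callbacks.tracers.langchain_v1 import get_headers

os.environ["LANGCHAIN_API_KEY"] = "test-key"  # placeholder for illustration
print(get_headers())
# {'Content-Type': 'application/json', 'x-api-key': 'test-key'}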
bc3d8c3d-d713-4435-a561-df954f682a31
[ "__future__.annotations", "logging", "os", "typing.Any", "typing.Dict", "typing.Optional", "typing.Union", "requests", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.ChainRun", "langchain.callbacks.tracers.schemas.LLMRun", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.ToolRun", "langchain.callbacks.tracers.schemas.TracerSession", "langchain.callbacks.tracers.schemas.TracerSessionV1", "langchain.callbacks.tracers.schemas.TracerSessionV1Base", "langchain.schema.messages.get_buffer_string", "langchain.utils.raise_for_status_with_text" ]
langchain.callbacks.tracers.langchain_v1.LangChainTracerV1
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html#langchain.callbacks.tracers.langchain_v1.LangChainTracerV1
class LangChainTracerV1(BaseTracer):
    """An implementation of the SharedTracer that POSTS to the langchain endpoint."""

    def __init__(self, **kwargs: Any) -> None:
        """Initialize the LangChain tracer."""
        super().__init__(**kwargs)
        self.session: Optional[TracerSessionV1] = None
        self._endpoint = _get_endpoint()
        self._headers = get_headers()

    def _convert_to_v1_run(self, run: Run) -> Union[LLMRun, ChainRun, ToolRun]:
        session = self.session or self.load_default_session()
        if not isinstance(session, TracerSessionV1):
            raise ValueError(
                "LangChainTracerV1 is not compatible with"
                f" session of type {type(session)}"
            )

        if run.run_type == "llm":
            if "prompts" in run.inputs:
                prompts = run.inputs["prompts"]
            elif "messages" in run.inputs:
                prompts = [get_buffer_string(batch) for batch in run.inputs["messages"]]
            else:
                raise ValueError("No prompts found in LLM run inputs")
            return LLMRun(
                uuid=str(run.id) if run.id else None,
                parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,
                start_time=run.start_time,
                end_time=run.end_time,
                extra=run.extra,
                execution_order=run.execution_order,
                child_execution_order=run.child_execution_order,
                serialized=run.serialized,
                session_id=session.id,
                error=run.error,
                prompts=prompts,
                response=run.outputs if run.outputs else None,
            )
        if run.run_type == "chain":
            child_runs = [self._convert_to_v1_run(run) for run in run.child_runs]
            return ChainRun(
                uuid=str(run.id) if run.id else None,
                parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,
                start_time=run.start_time,
                end_time=run.end_time,
                execution_order=run.execution_order,
                child_execution_order=run.child_execution_order,
                serialized=run.serialized,
                session_id=session.id,
                inputs=run.inputs,
                outputs=run.outputs,
                error=run.error,
                extra=run.extra,
                child_llm_runs=[run for run in child_runs if isinstance(run, LLMRun)],
                child_chain_runs=[
                    run for run in child_runs if isinstance(run, ChainRun)
                ],
                child_tool_runs=[run for run in child_runs if isinstance(run, ToolRun)],
            )
        if run.run_type == "tool":
            child_runs = [self._convert_to_v1_run(run) for run in run.child_runs]
            return ToolRun(
                uuid=str(run.id) if run.id else None,
                parent_uuid=str(run.parent_run_id) if run.parent_run_id else None,
                start_time=run.start_time,
                end_time=run.end_time,
                execution_order=run.execution_order,
                child_execution_order=run.child_execution_order,
                serialized=run.serialized,
                session_id=session.id,
                action=str(run.serialized),
                tool_input=run.inputs.get("input", ""),
                output=None if run.outputs is None else run.outputs.get("output"),
                error=run.error,
                extra=run.extra,
                child_chain_runs=[
                    run for run in child_runs if isinstance(run, ChainRun)
                ],
                child_tool_runs=[run for run in child_runs if isinstance(run, ToolRun)],
                child_llm_runs=[run for run in child_runs if isinstance(run, LLMRun)],
            )
        raise ValueError(f"Unknown run type: {run.run_type}")

    def _persist_run(self, run: Union[Run, LLMRun, ChainRun, ToolRun]) -> None:
        """Persist a run."""
        if isinstance(run, Run):
            v1_run = self._convert_to_v1_run(run)
        else:
            v1_run = run
        if isinstance(v1_run, LLMRun):
            endpoint = f"{self._endpoint}/llm-runs"
        elif isinstance(v1_run, ChainRun):
            endpoint = f"{self._endpoint}/chain-runs"
        else:
            endpoint = f"{self._endpoint}/tool-runs"
        try:
            response = requests.post(
                endpoint,
                data=v1_run.json(),
                headers=self._headers,
            )
            raise_for_status_with_text(response)
        except Exception as e:
            logging.warning(f"Failed to persist run: {e}")

    def _persist_session(
        self, session_create: TracerSessionV1Base
    ) -> Union[TracerSessionV1, TracerSession]:
        """Persist a session."""
        try:
            r = requests.post(
                f"{self._endpoint}/sessions",
                data=session_create.json(),
                headers=self._headers,
            )
            session = TracerSessionV1(id=r.json()["id"], **session_create.dict())
        except Exception as e:
            logging.warning(f"Failed to create session, using default session: {e}")
            session = TracerSessionV1(id=1, **session_create.dict())
        return session

    def _load_session(self, session_name: Optional[str] = None) -> TracerSessionV1:
        """Load a session from the tracer."""
        try:
            url = f"{self._endpoint}/sessions"
            if session_name:
                url += f"?name={session_name}"
            r = requests.get(url, headers=self._headers)
            tracer_session = TracerSessionV1(**r.json()[0])
        except Exception as e:
            session_type = "default" if not session_name else session_name
            logging.warning(
                f"Failed to load {session_type} session, using empty session: {e}"
            )
            tracer_session = TracerSessionV1(id=1)

        self.session = tracer_session
        return tracer_session

    def load_session(self, session_name: str) -> Union[TracerSessionV1, TracerSession]:
        """Load a session with the given name from the tracer."""
        return self._load_session(session_name)

    def load_default_session(self) -> Union[TracerSessionV1, TracerSession]:
        """Load the default tracing session and set it as the Tracer's session."""
        return self._load_session("default")
langchain.callbacks.tracers.langchain_v1.LangChainTracerV1¶ class langchain.callbacks.tracers.langchain_v1.LangChainTracerV1(**kwargs: Any)[source]¶ Bases: BaseTracer An implementation of the SharedTracer that POSTS to the langchain endpoint. Initialize the LangChain tracer. Methods __init__(**kwargs) Initialize the LangChain tracer. load_default_session() Load the default tracing session and set it as the Tracer's session. load_session(session_name) Load a session with the given name from the tracer. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline load_default_session() → Union[TracerSessionV1, TracerSession][source]¶ Load the default tracing session and set it as the Tracer’s session. load_session(session_name: str) → Union[TracerSessionV1, TracerSession][source]¶ Load a session with the given name from the tracer. on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. 
on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, langchain.callbacks.tracers.schemas.Run]¶
An implementation of the SharedTracer that POSTS to the langchain endpoint.
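A hedged usage sketch. It assumes the private _get_endpoint helper resolves the target URL from the environment (e.g. a LANGCHAIN_ENDPOINT variable); that detail is not shown in this record, so treat it as an assumption. Note that load_default_session degrades gracefully when no server is reachable.

from langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1

tracer = LangChainTracerV1()
# Falls back to an empty TracerSessionV1(id=1) if the endpoint is
# unreachable, so this is safe to try without a running tracing server.
session = tracer.load_default_session()
print(session.id)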
16246f5e-4fa9-4f95-9674-dcdc5e62b7dd
[ "json", "typing.Any", "typing.Callable", "typing.List", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.input.get_bolded_text", "langchain.input.get_colored_text" ]
langchain.callbacks.tracers.stdout.try_json_stringify
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.try_json_stringify.html#langchain.callbacks.tracers.stdout.try_json_stringify
def try_json_stringify(obj: Any, fallback: str) -> str:
    """
    Try to stringify an object to JSON.

    Args:
        obj: Object to stringify.
        fallback: Fallback string to return if the object cannot be stringified.

    Returns:
        A JSON string if the object can be stringified, otherwise the
        fallback string.
    """
    try:
        return json.dumps(obj, indent=2, ensure_ascii=False)
    except Exception:
        return fallback
langchain.callbacks.tracers.stdout.try_json_stringify¶ langchain.callbacks.tracers.stdout.try_json_stringify(obj: Any, fallback: str) → str[source]¶ Try to stringify an object to JSON. :param obj: Object to stringify. :param fallback: Fallback string to return if the object cannot be stringified. Returns A JSON string if the object can be stringified, otherwise the fallback string.
Try to stringify an object to JSON.
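A quick sketch of both paths: a serializable dict comes back as pretty-printed JSON, while a dict containing a non-serializable value triggers the fallback string.

from langchain.callbacks.tracers.stdout import try_json_stringify

print(try_json_stringify({"question": "hi"}, "[inputs]"))  # pretty-printed JSON
print(try_json_stringify({"bad": object()}, "[inputs]"))   # not serializable -> "[inputs]"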
11814734-a6c3-47b0-acbc-4c0749c385c6
[ "json", "typing.Any", "typing.Callable", "typing.List", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.input.get_bolded_text", "langchain.input.get_colored_text" ]
langchain.callbacks.tracers.stdout.elapsed
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.elapsed.html#langchain.callbacks.tracers.stdout.elapsed
def elapsed(run: Any) -> str:
    """Get the elapsed time of a run.

    Args:
        run: any object with a start_time and end_time attribute.

    Returns:
        A string with the elapsed time in seconds or
        milliseconds if time is less than a second.
    """
    elapsed_time = run.end_time - run.start_time
    milliseconds = elapsed_time.total_seconds() * 1000
    if milliseconds < 1000:
        return f"{milliseconds}ms"
    return f"{(milliseconds / 1000):.2f}s"
langchain.callbacks.tracers.stdout.elapsed¶ langchain.callbacks.tracers.stdout.elapsed(run: Any) → str[source]¶ Get the elapsed time of a run. Parameters run – any object with a start_time and end_time attribute. Returns A string with the elapsed time in seconds ormilliseconds if time is less than a second.
Get the elapsed time of a run.
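Since the function only requires start_time and end_time attributes, a plain SimpleNamespace is enough to demonstrate the millisecond and second formats; the timestamps below are arbitrary.

from datetime import datetime, timedelta
from types import SimpleNamespace

from langchain.callbacks.tracers.stdout import elapsed

start = datetime(2023, 1, 1)
fast = SimpleNamespace(start_time=start, end_time=start + timedelta(milliseconds=250))
slow = SimpleNamespace(start_time=start, end_time=start + timedelta(seconds=3))
print(elapsed(fast))  # "250.0ms"
print(elapsed(slow))  # "3.00s"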
8caac097-336f-4475-9bbb-96779b901374
[ "json", "typing.Any", "typing.Callable", "typing.List", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.input.get_bolded_text", "langchain.input.get_colored_text" ]
langchain.callbacks.tracers.stdout.FunctionCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.FunctionCallbackHandler.html#langchain.callbacks.tracers.stdout.FunctionCallbackHandler
class FunctionCallbackHandler(BaseTracer):
    """Tracer that calls a function with a single str parameter."""

    name = "function_callback_handler"

    def __init__(self, function: Callable[[str], None], **kwargs: Any) -> None:
        super().__init__(**kwargs)
        self.function_callback = function

    def _persist_run(self, run: Run) -> None:
        pass

    def get_parents(self, run: Run) -> List[Run]:
        parents = []
        current_run = run
        while current_run.parent_run_id:
            parent = self.run_map.get(str(current_run.parent_run_id))
            if parent:
                parents.append(parent)
                current_run = parent
            else:
                break
        return parents

    def get_breadcrumbs(self, run: Run) -> str:
        parents = self.get_parents(run)[::-1]
        string = " > ".join(
            f"{parent.execution_order}:{parent.run_type}:{parent.name}"
            if i != len(parents) - 1
            else f"{parent.execution_order}:{parent.run_type}:{parent.name}"
            for i, parent in enumerate(parents + [run])
        )
        return string

    # logging methods
    def _on_chain_start(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[chain/start]', color='green')} "
            + get_bolded_text(f"[{crumbs}] Entering Chain run with input:\n")
            + f"{try_json_stringify(run.inputs, '[inputs]')}"
        )

    def _on_chain_end(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[chain/end]', color='blue')} "
            + get_bolded_text(
                f"[{crumbs}] [{elapsed(run)}] Exiting Chain run with output:\n"
            )
            + f"{try_json_stringify(run.outputs, '[outputs]')}"
        )

    def _on_chain_error(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[chain/error]', color='red')} "
            + get_bolded_text(
                f"[{crumbs}] [{elapsed(run)}] Chain run errored with error:\n"
            )
            + f"{try_json_stringify(run.error, '[error]')}"
        )

    def _on_llm_start(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        inputs = (
            {"prompts": [p.strip() for p in run.inputs["prompts"]]}
            if "prompts" in run.inputs
            else run.inputs
        )
        self.function_callback(
            f"{get_colored_text('[llm/start]', color='green')} "
            + get_bolded_text(f"[{crumbs}] Entering LLM run with input:\n")
            + f"{try_json_stringify(inputs, '[inputs]')}"
        )

    def _on_llm_end(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[llm/end]', color='blue')} "
            + get_bolded_text(
                f"[{crumbs}] [{elapsed(run)}] Exiting LLM run with output:\n"
            )
            + f"{try_json_stringify(run.outputs, '[response]')}"
        )

    def _on_llm_error(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[llm/error]', color='red')} "
            + get_bolded_text(
                f"[{crumbs}] [{elapsed(run)}] LLM run errored with error:\n"
            )
            + f"{try_json_stringify(run.error, '[error]')}"
        )

    def _on_tool_start(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f'{get_colored_text("[tool/start]", color="green")} '
            + get_bolded_text(f"[{crumbs}] Entering Tool run with input:\n")
            + f'"{run.inputs["input"].strip()}"'
        )

    def _on_tool_end(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        if run.outputs:
            self.function_callback(
                f'{get_colored_text("[tool/end]", color="blue")} '
                + get_bolded_text(
                    f"[{crumbs}] [{elapsed(run)}] Exiting Tool run with output:\n"
                )
                + f'"{run.outputs["output"].strip()}"'
            )

    def _on_tool_error(self, run: Run) -> None:
        crumbs = self.get_breadcrumbs(run)
        self.function_callback(
            f"{get_colored_text('[tool/error]', color='red')} "
            + get_bolded_text(f"[{crumbs}] [{elapsed(run)}] ")
            + f"Tool run errored with error:\n"
            f"{run.error}"
        )
langchain.callbacks.tracers.stdout.FunctionCallbackHandler¶ class langchain.callbacks.tracers.stdout.FunctionCallbackHandler(function: Callable[[str], None], **kwargs: Any)[source]¶ Bases: BaseTracer Tracer that calls a function with a single str parameter. Methods __init__(function, **kwargs) get_breadcrumbs(run) get_parents(run) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline get_breadcrumbs(run: Run) → str[source]¶ get_parents(run: Run) → List[Run][source]¶ on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. 
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'function_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
Tracer that calls a function with a single str parameter.
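A minimal sketch that drives the handler directly through the inherited BaseTracer callbacks instead of attaching it to a chain; the serialized dict and inputs are illustrative, and the exact text of the emitted log lines may vary.

from uuid import uuid4

from langchain.callbacks.tracers.stdout import FunctionCallbackHandler

lines = []
handler = FunctionCallbackHandler(function=lines.append)

run_id = uuid4()
handler.on_chain_start({"id": ["MyChain"]}, {"question": "hi"}, run_id=run_id)
handler.on_chain_end({"answer": "hello"}, run_id=run_id)
print("\n".join(lines))  # the [chain/start] and [chain/end] log lines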
be8f38bd-9500-405a-9842-3b44a5e16d8b
[ "json", "typing.Any", "typing.Callable", "typing.List", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.input.get_bolded_text", "langchain.input.get_colored_text" ]
langchain.callbacks.tracers.stdout.ConsoleCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html#langchain.callbacks.tracers.stdout.ConsoleCallbackHandler
class ConsoleCallbackHandler(FunctionCallbackHandler):
    """Tracer that prints to the console."""

    name = "console_callback_handler"

    def __init__(self, **kwargs: Any) -> None:
        super().__init__(function=print, **kwargs)
langchain.callbacks.tracers.stdout.ConsoleCallbackHandler¶ class langchain.callbacks.tracers.stdout.ConsoleCallbackHandler(**kwargs: Any)[source]¶ Bases: FunctionCallbackHandler Tracer that prints to the console. Methods __init__(**kwargs) get_breadcrumbs(run) get_parents(run) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline get_breadcrumbs(run: Run) → str¶ get_parents(run: Run) → List[Run]¶ on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. 
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'console_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
Tracer that prints to the console.
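A usage sketch: the handler is just FunctionCallbackHandler bound to print, so the typical pattern is to pass it through the callbacks argument of a run. The `chain` name below is an assumption (any already-constructed chain), so the call is left commented.

from langchain.callbacks.tracers.stdout import ConsoleCallbackHandler

handler = ConsoleCallbackHandler()
# `chain` is assumed to be an existing chain, e.g. an LLMChain; passing the
# handler prints a colored trace of every nested run to the console:
# chain.run("What is 2 + 2?", callbacks=[handler])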
8e06f7b6-98eb-43b0-878d-c4077d815d22
[ "typing.Any", "typing.List", "typing.Optional", "typing.Union", "uuid.UUID", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run" ]
langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html#langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler
class RunCollectorCallbackHandler(BaseTracer):
    """
    A tracer that collects all nested runs in a list.

    This tracer is useful for inspection and evaluation purposes.

    Parameters
    ----------
    example_id : Optional[Union[UUID, str]], default=None
        The ID of the example being traced. It can be either a UUID or a string.
    """

    name = "run-collector_callback_handler"

    def __init__(
        self, example_id: Optional[Union[UUID, str]] = None, **kwargs: Any
    ) -> None:
        """
        Initialize the RunCollectorCallbackHandler.

        Parameters
        ----------
        example_id : Optional[Union[UUID, str]], default=None
            The ID of the example being traced. It can be either a UUID or a string.
        """
        super().__init__(**kwargs)
        self.example_id = (
            UUID(example_id) if isinstance(example_id, str) else example_id
        )
        self.traced_runs: List[Run] = []

    def _persist_run(self, run: Run) -> None:
        """
        Persist a run by adding it to the traced_runs list.

        Parameters
        ----------
        run : Run
            The run to be persisted.
        """
        run_ = run.copy()
        run_.reference_example_id = self.example_id
        self.traced_runs.append(run_)
langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler¶ class langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler(example_id: Optional[Union[UUID, str]] = None, **kwargs: Any)[source]¶ Bases: BaseTracer A tracer that collects all nested runs in a list. This tracer is useful for inspection and evaluation purposes. Parameters example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string. Initialize the RunCollectorCallbackHandler. Parameters example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string. Methods __init__([example_id]) Initialize the RunCollectorCallbackHandler. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. 
on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'run-collector_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
A tracer that collects all nested runs in a list.
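A minimal sketch driving the collector through the inherited BaseTracer callbacks; the serialized dict and inputs are illustrative. Only completed top-level runs reach _persist_run, so traced_runs holds one entry after the chain ends.

from uuid import uuid4

from langchain.callbacks.tracers.run_collector import RunCollectorCallbackHandler

collector = RunCollectorCallbackHandler(example_id=str(uuid4()))

run_id = uuid4()
collector.on_chain_start({"id": ["MyChain"]}, {"input": "hi"}, run_id=run_id)
collector.on_chain_end({"output": "ok"}, run_id=run_id)

print(len(collector.traced_runs))                      # 1
print(collector.traced_runs[0].reference_example_id)   # the example_id above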
db2b9fa1-7dea-4de9-a108-56dc45cc3501
[ "__future__.annotations", "logging", "os", "concurrent.futures.Future", "concurrent.futures.ThreadPoolExecutor", "concurrent.futures.wait", "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Set", "typing.Union", "uuid.UUID", "langchainplus_sdk.LangChainPlusClient", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "langchain.callbacks.tracers.schemas.TracerSession", "langchain.env.get_runtime_environment", "langchain.load.dump.dumpd", "langchain.schema.messages.BaseMessage" ]
langchain.callbacks.tracers.langchain.log_error_once
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.log_error_once.html#langchain.callbacks.tracers.langchain.log_error_once
def log_error_once(method: str, exception: Exception) -> None:
    """Log an error once."""
    global _LOGGED
    if (method, type(exception)) in _LOGGED:
        return
    _LOGGED.add((method, type(exception)))
    logger.error(exception)
langchain.callbacks.tracers.langchain.log_error_once¶ langchain.callbacks.tracers.langchain.log_error_once(method: str, exception: Exception) → None[source]¶ Log an error once.
Log an error once.
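A quick sketch of the deduplication: errors are keyed by (method, exception type), so a repeat of the same pair is suppressed while a different method logs again.

import logging

from langchain.callbacks.tracers.langchain import log_error_once

logging.basicConfig(level=logging.ERROR)
log_error_once("post", ValueError("request failed"))   # logged
log_error_once("post", ValueError("request failed"))   # same (method, type): suppressed
log_error_once("patch", ValueError("request failed"))  # different method: logged again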
88151b8b-d600-4a80-9125-bb4a14f328ac
[ "__future__.annotations", "logging", "os", "concurrent.futures.Future", "concurrent.futures.ThreadPoolExecutor", "concurrent.futures.wait", "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Set", "typing.Union", "uuid.UUID", "langchainplus_sdk.LangChainPlusClient", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "langchain.callbacks.tracers.schemas.TracerSession", "langchain.env.get_runtime_environment", "langchain.load.dump.dumpd", "langchain.schema.messages.BaseMessage" ]
langchain.callbacks.tracers.langchain.wait_for_all_tracers
Function
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.wait_for_all_tracers.html#langchain.callbacks.tracers.langchain.wait_for_all_tracers
def wait_for_all_tracers() -> None:
    """Wait for all tracers to finish."""
    global _TRACERS
    for tracer in _TRACERS:
        tracer.wait_for_futures()
langchain.callbacks.tracers.langchain.wait_for_all_tracers¶ langchain.callbacks.tracers.langchain.wait_for_all_tracers() → None[source]¶ Wait for all tracers to finish.
Wait for all tracers to finish.
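A usage sketch of the typical shutdown pattern: because LangChainTracer posts runs from a background executor, flushing before the process exits avoids dropped traces. The traced work itself is elided here.

from langchain.callbacks.tracers.langchain import wait_for_all_tracers

try:
    pass  # ... run chains traced by LangChainTracer instances ...
finally:
    # Block until every queued create/update request has been sent.
    wait_for_all_tracers()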
83cf6de8-dcd7-437f-bdfb-01db51ef58e6
[ "__future__.annotations", "logging", "os", "concurrent.futures.Future", "concurrent.futures.ThreadPoolExecutor", "concurrent.futures.wait", "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Set", "typing.Union", "uuid.UUID", "langchainplus_sdk.LangChainPlusClient", "langchain.callbacks.tracers.base.BaseTracer", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "langchain.callbacks.tracers.schemas.TracerSession", "langchain.env.get_runtime_environment", "langchain.load.dump.dumpd", "langchain.schema.messages.BaseMessage" ]
langchain.callbacks.tracers.langchain.LangChainTracer
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.langchain.LangChainTracer.html#langchain.callbacks.tracers.langchain.LangChainTracer
class LangChainTracer(BaseTracer):
    """An implementation of the SharedTracer that POSTS to the langchain endpoint."""

    def __init__(
        self,
        example_id: Optional[Union[UUID, str]] = None,
        project_name: Optional[str] = None,
        client: Optional[LangChainPlusClient] = None,
        tags: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the LangChain tracer."""
        super().__init__(**kwargs)
        self.session: Optional[TracerSession] = None
        self.example_id = (
            UUID(example_id) if isinstance(example_id, str) else example_id
        )
        self.project_name = project_name or os.getenv(
            "LANGCHAIN_PROJECT", os.getenv("LANGCHAIN_SESSION", "default")
        )
        # set max_workers to 1 to process tasks in order
        self.executor = ThreadPoolExecutor(max_workers=1)
        self.client = client or LangChainPlusClient()
        self._futures: Set[Future] = set()
        self.tags = tags or []
        global _TRACERS
        _TRACERS.append(self)

    def on_chat_model_start(
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        *,
        run_id: UUID,
        tags: Optional[List[str]] = None,
        parent_run_id: Optional[UUID] = None,
        metadata: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> None:
        """Start a trace for an LLM run."""
        parent_run_id_ = str(parent_run_id) if parent_run_id else None
        execution_order = self._get_execution_order(parent_run_id_)
        start_time = datetime.utcnow()
        if metadata:
            kwargs.update({"metadata": metadata})
        chat_model_run = Run(
            id=run_id,
            parent_run_id=parent_run_id,
            serialized=serialized,
            inputs={"messages": [[dumpd(msg) for msg in batch] for batch in messages]},
            extra=kwargs,
            events=[{"name": "start", "time": start_time}],
            start_time=start_time,
            execution_order=execution_order,
            child_execution_order=execution_order,
            run_type=RunTypeEnum.llm,
            tags=tags,
        )
        self._start_trace(chat_model_run)
        self._on_chat_model_start(chat_model_run)

    def _persist_run(self, run: Run) -> None:
        """The Langchain Tracer uses Post/Patch rather than persist."""

    def _get_tags(self, run: Run) -> List[str]:
        """Get combined tags for a run."""
        tags = set(run.tags or [])
        tags.update(self.tags or [])
        return list(tags)

    def _persist_run_single(self, run: Run) -> None:
        """Persist a run."""
        run_dict = run.dict(exclude={"child_runs"})
        run_dict["tags"] = self._get_tags(run)
        extra = run_dict.get("extra", {})
        extra["runtime"] = get_runtime_environment()
        run_dict["extra"] = extra
        try:
            self.client.create_run(**run_dict, project_name=self.project_name)
        except Exception as e:
            # Errors are swallowed by the thread executor so we need to log them here
            log_error_once("post", e)
            raise

    def _update_run_single(self, run: Run) -> None:
        """Update a run."""
        try:
            run_dict = run.dict()
            run_dict["tags"] = self._get_tags(run)
            self.client.update_run(run.id, **run_dict)
        except Exception as e:
            # Errors are swallowed by the thread executor so we need to log them here
            log_error_once("patch", e)
            raise

    def _on_llm_start(self, run: Run) -> None:
        """Persist an LLM run."""
        if run.parent_run_id is None:
            run.reference_example_id = self.example_id
        self._futures.add(
            self.executor.submit(self._persist_run_single, run.copy(deep=True))
        )

    def _on_chat_model_start(self, run: Run) -> None:
        """Persist an LLM run."""
        if run.parent_run_id is None:
            run.reference_example_id = self.example_id
        self._futures.add(
            self.executor.submit(self._persist_run_single, run.copy(deep=True))
        )

    def _on_llm_end(self, run: Run) -> None:
        """Process the LLM Run."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_llm_error(self, run: Run) -> None:
        """Process the LLM Run upon error."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_chain_start(self, run: Run) -> None:
        """Process the Chain Run upon start."""
        if run.parent_run_id is None:
            run.reference_example_id = self.example_id
        self._futures.add(
            self.executor.submit(self._persist_run_single, run.copy(deep=True))
        )

    def _on_chain_end(self, run: Run) -> None:
        """Process the Chain Run."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_chain_error(self, run: Run) -> None:
        """Process the Chain Run upon error."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_tool_start(self, run: Run) -> None:
        """Process the Tool Run upon start."""
        if run.parent_run_id is None:
            run.reference_example_id = self.example_id
        self._futures.add(
            self.executor.submit(self._persist_run_single, run.copy(deep=True))
        )

    def _on_tool_end(self, run: Run) -> None:
        """Process the Tool Run."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_tool_error(self, run: Run) -> None:
        """Process the Tool Run upon error."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_retriever_start(self, run: Run) -> None:
        """Process the Retriever Run upon start."""
        if run.parent_run_id is None:
            run.reference_example_id = self.example_id
        self._futures.add(
            self.executor.submit(self._persist_run_single, run.copy(deep=True))
        )

    def _on_retriever_end(self, run: Run) -> None:
        """Process the Retriever Run."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def _on_retriever_error(self, run: Run) -> None:
        """Process the Retriever Run upon error."""
        self._futures.add(
            self.executor.submit(self._update_run_single, run.copy(deep=True))
        )

    def wait_for_futures(self) -> None:
        """Wait for the given futures to complete."""
        futures = list(self._futures)
        wait(futures)
        for future in futures:
            self._futures.remove(future)
langchain.callbacks.tracers.langchain.LangChainTracer¶ class langchain.callbacks.tracers.langchain.LangChainTracer(example_id: Optional[Union[UUID, str]] = None, project_name: Optional[str] = None, client: Optional[Client] = None, tags: Optional[List[str]] = None, **kwargs: Any)[source]¶ Bases: BaseTracer An implementation of the SharedTracer that POSTS to the langchain endpoint. Initialize the LangChain tracer. Methods __init__([example_id, project_name, client, ...]) Initialize the LangChain tracer. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Start a trace for an LLM run. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. wait_for_futures() Wait for the given futures to complete. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for an LLM run. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. 
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. wait_for_futures() → None[source]¶ Wait for the given futures to complete. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, langchain.callbacks.tracers.schemas.Run]¶
An implementation of the SharedTracer that POSTS to the langchain endpoint.
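A hedged usage sketch: constructing the tracer instantiates a LangChainPlusClient, which is assumed to need valid API credentials and endpoint configuration in the environment, so the key below is a placeholder and the traced call is left commented.

import os

from langchain.callbacks.tracers.langchain import LangChainTracer

os.environ["LANGCHAIN_API_KEY"] = "..."  # placeholder credentials
tracer = LangChainTracer(project_name="my-experiment", tags=["demo"])
# Pass the tracer as a callback, e.g. chain.run("...", callbacks=[tracer]),
# then block until the queued POST/PATCH requests have been sent:
tracer.wait_for_futures()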
14812019-12e7-4165-85fc-250026755ca5
[ "__future__.annotations", "logging", "abc.ABC", "abc.abstractmethod", "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Union", "typing.cast", "uuid.UUID", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "langchain.load.dump.dumpd", "langchain.schema.document.Document", "langchain.schema.output.ChatGeneration", "langchain.schema.output.LLMResult" ]
langchain.callbacks.tracers.base.TracerException
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.TracerException.html#langchain.callbacks.tracers.base.TracerException
class TracerException(Exception): """Base class for exceptions in tracers module."""
langchain.callbacks.tracers.base.TracerException¶ class langchain.callbacks.tracers.base.TracerException[source]¶ Bases: Exception Base class for exceptions in tracers module. add_note()¶ Exception.add_note(note) – add a note to the exception with_traceback()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. args¶
Base class for exceptions in tracers module.
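A sketch of when this exception surfaces (hedged: _NoopTracer is an illustrative subclass, not part of the library): BaseTracer raises TracerException when a callback arrives for a run it has never seen.

from uuid import uuid4
from langchain.callbacks.tracers.base import BaseTracer, TracerException
from langchain.callbacks.tracers.schemas import Run

class _NoopTracer(BaseTracer):
    def _persist_run(self, run: Run) -> None:
        pass

tracer = _NoopTracer()
try:
    # No LLM run was started for this id, so there is nothing to trace.
    tracer.on_llm_new_token("hi", run_id=uuid4())
except TracerException as err:
    print(err)  # No LLM Run found to be traced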
0fdd4bb5-7b6e-4821-a8e0-5f3db432cdfd
[ "__future__.annotations", "logging", "abc.ABC", "abc.abstractmethod", "datetime.datetime", "typing.Any", "typing.Dict", "typing.List", "typing.Optional", "typing.Sequence", "typing.Union", "typing.cast", "uuid.UUID", "langchain.callbacks.base.BaseCallbackHandler", "langchain.callbacks.tracers.schemas.Run", "langchain.callbacks.tracers.schemas.RunTypeEnum", "langchain.load.dump.dumpd", "langchain.schema.document.Document", "langchain.schema.output.ChatGeneration", "langchain.schema.output.LLMResult" ]
langchain.callbacks.tracers.base.BaseTracer
Class
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.tracers.base.BaseTracer.html#langchain.callbacks.tracers.base.BaseTracer
class BaseTracer(BaseCallbackHandler, ABC): """Base interface for tracers.""" def __init__(self, **kwargs: Any) -> None: super().__init__(**kwargs) self.run_map: Dict[str, Run] = {} @staticmethod def _add_child_run( parent_run: Run, child_run: Run, ) -> None: """Add child run to a chain run or tool run.""" parent_run.child_runs.append(child_run) @abstractmethod def _persist_run(self, run: Run) -> None: """Persist a run.""" def _start_trace(self, run: Run) -> None: """Start a trace for a run.""" if run.parent_run_id: parent_run = self.run_map[str(run.parent_run_id)] if parent_run: self._add_child_run(parent_run, run) else: logger.debug(f"Parent run with UUID {run.parent_run_id} not found.") self.run_map[str(run.id)] = run def _end_trace(self, run: Run) -> None: """End a trace for a run.""" if not run.parent_run_id: self._persist_run(run) else: parent_run = self.run_map.get(str(run.parent_run_id)) if parent_run is None: logger.debug(f"Parent run with UUID {run.parent_run_id} not found.") elif ( run.child_execution_order is not None and parent_run.child_execution_order is not None and run.child_execution_order > parent_run.child_execution_order ): parent_run.child_execution_order = run.child_execution_order self.run_map.pop(str(run.id)) def _get_execution_order(self, parent_run_id: Optional[str] = None) -> int: """Get the execution order for a run.""" if parent_run_id is None: return 1 parent_run = self.run_map.get(parent_run_id) if parent_run is None: logger.debug(f"Parent run with UUID {parent_run_id} not found.") return 1 if parent_run.child_execution_order is None: raise TracerException( f"Parent run with UUID {parent_run_id} has no child execution order." ) return parent_run.child_execution_order + 1 def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> None: """Start a trace for an LLM run.""" parent_run_id_ = str(parent_run_id) if parent_run_id else None execution_order = self._get_execution_order(parent_run_id_) start_time = datetime.utcnow() if metadata: kwargs.update({"metadata": metadata}) llm_run = Run( id=run_id, parent_run_id=parent_run_id, serialized=serialized, inputs={"prompts": prompts}, extra=kwargs, events=[{"name": "start", "time": start_time}], start_time=start_time, execution_order=execution_order, child_execution_order=execution_order, run_type=RunTypeEnum.llm, tags=tags or [], ) self._start_trace(llm_run) self._on_llm_start(llm_run) def on_llm_new_token( self, token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any, ) -> None: """Run on new LLM token. 
Only available when streaming is enabled.""" if not run_id: raise TracerException("No run_id provided for on_llm_new_token callback.") run_id_ = str(run_id) llm_run = self.run_map.get(run_id_) if llm_run is None or llm_run.run_type != RunTypeEnum.llm: raise TracerException("No LLM Run found to be traced") llm_run.events.append( { "name": "new_token", "time": datetime.utcnow(), "kwargs": {"token": token}, }, ) def on_llm_end(self, response: LLMResult, *, run_id: UUID, **kwargs: Any) -> None: """End a trace for an LLM run.""" if not run_id: raise TracerException("No run_id provided for on_llm_end callback.") run_id_ = str(run_id) llm_run = self.run_map.get(run_id_) if llm_run is None or llm_run.run_type != RunTypeEnum.llm: raise TracerException("No LLM Run found to be traced") llm_run.outputs = response.dict() for i, generations in enumerate(response.generations): for j, generation in enumerate(generations): output_generation = llm_run.outputs["generations"][i][j] if "message" in output_generation: output_generation["message"] = dumpd( cast(ChatGeneration, generation).message ) llm_run.end_time = datetime.utcnow() llm_run.events.append({"name": "end", "time": llm_run.end_time}) self._end_trace(llm_run) self._on_llm_end(llm_run) def on_llm_error( self, error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any, ) -> None: """Handle an error for an LLM run.""" if not run_id: raise TracerException("No run_id provided for on_llm_error callback.") run_id_ = str(run_id) llm_run = self.run_map.get(run_id_) if llm_run is None or llm_run.run_type != RunTypeEnum.llm: raise TracerException("No LLM Run found to be traced") llm_run.error = repr(error) llm_run.end_time = datetime.utcnow() llm_run.events.append({"name": "error", "time": llm_run.end_time}) self._end_trace(llm_run) self._on_chain_error(llm_run) def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> None: """Start a trace for a chain run.""" parent_run_id_ = str(parent_run_id) if parent_run_id else None execution_order = self._get_execution_order(parent_run_id_) start_time = datetime.utcnow() if metadata: kwargs.update({"metadata": metadata}) chain_run = Run( id=run_id, parent_run_id=parent_run_id, serialized=serialized, inputs=inputs, extra=kwargs, events=[{"name": "start", "time": start_time}], start_time=start_time, execution_order=execution_order, child_execution_order=execution_order, child_runs=[], run_type=RunTypeEnum.chain, tags=tags or [], ) self._start_trace(chain_run) self._on_chain_start(chain_run) def on_chain_end( self, outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any ) -> None: """End a trace for a chain run.""" if not run_id: raise TracerException("No run_id provided for on_chain_end callback.") chain_run = self.run_map.get(str(run_id)) if chain_run is None or chain_run.run_type != RunTypeEnum.chain: raise TracerException("No chain Run found to be traced") chain_run.outputs = outputs chain_run.end_time = datetime.utcnow() chain_run.events.append({"name": "end", "time": chain_run.end_time}) self._end_trace(chain_run) self._on_chain_end(chain_run) def on_chain_error( self, error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any, ) -> None: """Handle an error for a chain run.""" if not run_id: raise TracerException("No run_id provided for on_chain_error callback.") chain_run = self.run_map.get(str(run_id)) if 
chain_run is None or chain_run.run_type != RunTypeEnum.chain: raise TracerException("No chain Run found to be traced") chain_run.error = repr(error) chain_run.end_time = datetime.utcnow() chain_run.events.append({"name": "error", "time": chain_run.end_time}) self._end_trace(chain_run) self._on_chain_error(chain_run) def on_tool_start( self, serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> None: """Start a trace for a tool run.""" parent_run_id_ = str(parent_run_id) if parent_run_id else None execution_order = self._get_execution_order(parent_run_id_) start_time = datetime.utcnow() if metadata: kwargs.update({"metadata": metadata}) tool_run = Run( id=run_id, parent_run_id=parent_run_id, serialized=serialized, inputs={"input": input_str}, extra=kwargs, events=[{"name": "start", "time": start_time}], start_time=start_time, execution_order=execution_order, child_execution_order=execution_order, child_runs=[], run_type=RunTypeEnum.tool, tags=tags or [], ) self._start_trace(tool_run) self._on_tool_start(tool_run) def on_tool_end(self, output: str, *, run_id: UUID, **kwargs: Any) -> None: """End a trace for a tool run.""" if not run_id: raise TracerException("No run_id provided for on_tool_end callback.") tool_run = self.run_map.get(str(run_id)) if tool_run is None or tool_run.run_type != RunTypeEnum.tool: raise TracerException("No tool Run found to be traced") tool_run.outputs = {"output": output} tool_run.end_time = datetime.utcnow() tool_run.events.append({"name": "end", "time": tool_run.end_time}) self._end_trace(tool_run) self._on_tool_end(tool_run) def on_tool_error( self, error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any, ) -> None: """Handle an error for a tool run.""" if not run_id: raise TracerException("No run_id provided for on_tool_error callback.") tool_run = self.run_map.get(str(run_id)) if tool_run is None or tool_run.run_type != RunTypeEnum.tool: raise TracerException("No tool Run found to be traced") tool_run.error = repr(error) tool_run.end_time = datetime.utcnow() tool_run.events.append({"name": "error", "time": tool_run.end_time}) self._end_trace(tool_run) self._on_tool_error(tool_run) def on_retriever_start( self, serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> None: """Run when Retriever starts running.""" parent_run_id_ = str(parent_run_id) if parent_run_id else None execution_order = self._get_execution_order(parent_run_id_) start_time = datetime.utcnow() if metadata: kwargs.update({"metadata": metadata}) retrieval_run = Run( id=run_id, name="Retriever", parent_run_id=parent_run_id, serialized=serialized, inputs={"query": query}, extra=kwargs, events=[{"name": "start", "time": start_time}], start_time=start_time, execution_order=execution_order, child_execution_order=execution_order, child_runs=[], run_type=RunTypeEnum.retriever, ) self._start_trace(retrieval_run) self._on_retriever_start(retrieval_run) def on_retriever_error( self, error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any, ) -> None: """Run when Retriever errors.""" if not run_id: raise TracerException("No run_id provided for on_retriever_error callback.") retrieval_run = self.run_map.get(str(run_id)) if retrieval_run is None or retrieval_run.run_type != RunTypeEnum.retriever: raise TracerException("No 
retriever Run found to be traced") retrieval_run.error = repr(error) retrieval_run.end_time = datetime.utcnow() retrieval_run.events.append({"name": "error", "time": retrieval_run.end_time}) self._end_trace(retrieval_run) self._on_retriever_error(retrieval_run) def on_retriever_end( self, documents: Sequence[Document], *, run_id: UUID, **kwargs: Any ) -> None: """Run when Retriever ends running.""" if not run_id: raise TracerException("No run_id provided for on_retriever_end callback.") retrieval_run = self.run_map.get(str(run_id)) if retrieval_run is None or retrieval_run.run_type != RunTypeEnum.retriever: raise TracerException("No retriever Run found to be traced") retrieval_run.outputs = {"documents": documents} retrieval_run.end_time = datetime.utcnow() retrieval_run.events.append({"name": "end", "time": retrieval_run.end_time}) self._end_trace(retrieval_run) self._on_retriever_end(retrieval_run) def __deepcopy__(self, memo: dict) -> BaseTracer: """Deepcopy the tracer.""" return self def __copy__(self) -> BaseTracer: """Copy the tracer.""" return self def _on_llm_start(self, run: Run) -> None: """Process the LLM Run upon start.""" def _on_llm_end(self, run: Run) -> None: """Process the LLM Run.""" def _on_llm_error(self, run: Run) -> None: """Process the LLM Run upon error.""" def _on_chain_start(self, run: Run) -> None: """Process the Chain Run upon start.""" def _on_chain_end(self, run: Run) -> None: """Process the Chain Run.""" def _on_chain_error(self, run: Run) -> None: """Process the Chain Run upon error.""" def _on_tool_start(self, run: Run) -> None: """Process the Tool Run upon start.""" def _on_tool_end(self, run: Run) -> None: """Process the Tool Run.""" def _on_tool_error(self, run: Run) -> None: """Process the Tool Run upon error.""" def _on_chat_model_start(self, run: Run) -> None: """Process the Chat Model Run upon start.""" def _on_retriever_start(self, run: Run) -> None: """Process the Retriever Run upon start.""" def _on_retriever_end(self, run: Run) -> None: """Process the Retriever Run.""" def _on_retriever_error(self, run: Run) -> None: """Process the Retriever Run upon error."""
langchain.callbacks.tracers.base.BaseTracer¶ class langchain.callbacks.tracers.base.BaseTracer(**kwargs: Any)[source]¶ Bases: BaseCallbackHandler, ABC Base interface for tracers. Methods __init__(**kwargs) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for an LLM run. 
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None[source]¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
Base interface for tracers.
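A minimal sketch of a concrete tracer (the PrintTracer name is illustrative): subclasses only need to implement _persist_run, which receives each completed top-level Run.

from langchain.callbacks.tracers.base import BaseTracer
from langchain.callbacks.tracers.schemas import Run

class PrintTracer(BaseTracer):
    """Illustrative tracer that prints each finished top-level run."""

    def _persist_run(self, run: Run) -> None:
        print(f"{run.run_type} run {run.id}: {len(run.child_runs)} child runs")

# Pass an instance via callbacks=[PrintTracer()] to any chain, LLM, or tool run.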
6f6ee550-cc8f-4df4-891f-4ebbc54e66bc
[ "typing.Optional", "requests", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.ifttt.IFTTTWebhook
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.ifttt.IFTTTWebhook.html#langchain.tools.ifttt.IFTTTWebhook
class IFTTTWebhook(BaseTool): """IFTTT Webhook. Args: name: name of the tool description: description of the tool url: url to hit with the json event. """ url: str def _run( self, tool_input: str, run_manager: Optional[CallbackManagerForToolRun] = None, ) -> str: body = {"this": tool_input} response = requests.post(self.url, data=body) return response.text async def _arun( self, tool_input: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: raise NotImplementedError("Not implemented.")
langchain.tools.ifttt.IFTTTWebhook¶ class langchain.tools.ifttt.IFTTTWebhook(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, url: str)[source]¶ Bases: BaseTool IFTTT Webhook. Parameters name – name of the tool description – description of the tool url – url to hit with the json event. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str [Required]¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str [Required]¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param url: str [Required]¶ param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. 
arbitrary_types_allowed = True¶ extra = 'forbid'¶
IFTTT Webhook.
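A usage sketch (the event name and key in the URL are placeholders for a real IFTTT webhook):

from langchain.tools.ifttt import IFTTTWebhook

tool = IFTTTWebhook(
    name="spotify",
    description="Add a song to a Spotify playlist",
    url="https://maker.ifttt.com/trigger/<event>/with/key/<key>",
)
# POSTs {"this": "taylor swift"} to the URL and returns the response body.
print(tool.run("taylor swift"))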
1f51b8e9-dde1-431e-a12f-4c791a8ce566
[ "__future__.annotations", "json", "typing.Optional", "typing.Type", "requests", "yaml", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.plugin.ApiConfig
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.ApiConfig.html#langchain.tools.plugin.ApiConfig
class ApiConfig(BaseModel): type: str url: str has_user_authentication: Optional[bool] = False
langchain.tools.plugin.ApiConfig¶ class langchain.tools.plugin.ApiConfig(*, type: str, url: str, has_user_authentication: Optional[bool] = False)[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param has_user_authentication: Optional[bool] = False¶ param type: str [Required]¶ param url: str [Required]¶
Create a new model by parsing and validating input data from keyword arguments.
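A construction sketch (the values are placeholders; has_user_authentication defaults to False):

from langchain.tools.plugin import ApiConfig

config = ApiConfig(type="openapi", url="https://example.com/openapi.yaml")
print(config.has_user_authentication)  # False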
1c97069f-70d0-4d7b-bb96-3be8abe9c4bd
[ "__future__.annotations", "json", "typing.Optional", "typing.Type", "requests", "yaml", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.plugin.AIPlugin
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPlugin.html#langchain.tools.plugin.AIPlugin
class AIPlugin(BaseModel): """AI Plugin Definition.""" schema_version: str name_for_model: str name_for_human: str description_for_model: str description_for_human: str auth: Optional[dict] = None api: ApiConfig logo_url: Optional[str] contact_email: Optional[str] legal_info_url: Optional[str] @classmethod def from_url(cls, url: str) -> AIPlugin: """Instantiate AIPlugin from a URL.""" response = requests.get(url).json() return cls(**response)
langchain.tools.plugin.AIPlugin¶ class langchain.tools.plugin.AIPlugin(*, schema_version: str, name_for_model: str, name_for_human: str, description_for_model: str, description_for_human: str, auth: Optional[dict] = None, api: ApiConfig, logo_url: Optional[str] = None, contact_email: Optional[str] = None, legal_info_url: Optional[str] = None)[source]¶ Bases: BaseModel AI Plugin Definition. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api: langchain.tools.plugin.ApiConfig [Required]¶ param auth: Optional[dict] = None¶ param contact_email: Optional[str] = None¶ param description_for_human: str [Required]¶ param description_for_model: str [Required]¶ param legal_info_url: Optional[str] = None¶ param logo_url: Optional[str] = None¶ param name_for_human: str [Required]¶ param name_for_model: str [Required]¶ param schema_version: str [Required]¶ classmethod from_url(url: str) → AIPlugin[source]¶ Instantiate AIPlugin from a URL.
AI Plugin Definition.
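A usage sketch (the manifest URL is a placeholder; from_url GETs it and validates the JSON against the model):

from langchain.tools.plugin import AIPlugin

plugin = AIPlugin.from_url("https://example.com/.well-known/ai-plugin.json")
print(plugin.name_for_model, plugin.api.url)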
f9ba20ad-aea1-47f1-a8bc-c1299a75a44f
[ "__future__.annotations", "json", "typing.Optional", "typing.Type", "requests", "yaml", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.plugin.marshal_spec
Function
https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.marshal_spec.html#langchain.tools.plugin.marshal_spec
def marshal_spec(txt: str) -> dict: """Convert the yaml or json serialized spec to a dict. Args: txt: The yaml or json serialized spec. Returns: dict: The spec as a dict. """ try: return json.loads(txt) except json.JSONDecodeError: return yaml.safe_load(txt)
langchain.tools.plugin.marshal_spec¶ langchain.tools.plugin.marshal_spec(txt: str) → dict[source]¶ Convert the yaml or json serialized spec to a dict. Parameters txt – The yaml or json serialized spec. Returns The spec as a dict. Return type dict
Convert the yaml or json serialized spec to a dict.
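A usage sketch showing both branches: JSON parses directly, while non-JSON input falls through to yaml.safe_load.

from langchain.tools.plugin import marshal_spec

assert marshal_spec('{"openapi": "3.0.0"}') == {"openapi": "3.0.0"}
assert marshal_spec("openapi: 3.0.0") == {"openapi": "3.0.0"}  # YAML fallback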
06a89d48-a3bd-434e-ae98-0ad6775b538e
[ "__future__.annotations", "json", "typing.Optional", "typing.Type", "requests", "yaml", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.plugin.AIPluginToolSchema
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginToolSchema.html#langchain.tools.plugin.AIPluginToolSchema
class AIPluginToolSchema(BaseModel): """AIPluginToolSchema.""" tool_input: Optional[str] = ""
langchain.tools.plugin.AIPluginToolSchema¶ class langchain.tools.plugin.AIPluginToolSchema(*, tool_input: Optional[str] = '')[source]¶ Bases: BaseModel AIPluginToolSchema. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param tool_input: Optional[str] = ''¶
AIPluginToolSchema.
5a2ecf2c-304d-48a6-be29-fdf5eba62d37
[ "__future__.annotations", "json", "typing.Optional", "typing.Type", "requests", "yaml", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool" ]
langchain.tools.plugin.AIPluginTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.plugin.AIPluginTool.html#langchain.tools.plugin.AIPluginTool
class AIPluginTool(BaseTool): plugin: AIPlugin api_spec: str args_schema: Type[AIPluginToolSchema] = AIPluginToolSchema @classmethod def from_plugin_url(cls, url: str) -> AIPluginTool: plugin = AIPlugin.from_url(url) description = ( f"Call this tool to get the OpenAPI spec (and usage guide) " f"for interacting with the {plugin.name_for_human} API. " f"You should only call this ONCE! What is the " f"{plugin.name_for_human} API useful for? " ) + plugin.description_for_human open_api_spec_str = requests.get(plugin.api.url).text open_api_spec = marshal_spec(open_api_spec_str) api_spec = ( f"Usage Guide: {plugin.description_for_model}\n\n" f"OpenAPI Spec: {open_api_spec}" ) return cls( name=plugin.name_for_model, description=description, plugin=plugin, api_spec=api_spec, ) def _run( self, tool_input: Optional[str] = "", run_manager: Optional[CallbackManagerForToolRun] = None, ) -> str: """Use the tool.""" return self.api_spec async def _arun( self, tool_input: Optional[str] = None, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: """Use the tool asynchronously.""" return self.api_spec
langchain.tools.plugin.AIPluginTool¶ class langchain.tools.plugin.AIPluginTool(*, name: str, description: str, args_schema: ~typing.Type[~langchain.tools.plugin.AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, plugin: ~langchain.tools.plugin.AIPlugin, api_spec: str)[source]¶ Bases: BaseTool Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_spec: str [Required]¶ param args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str [Required]¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str [Required]¶ The unique name of the tool that clearly communicates its purpose. param plugin: AIPlugin [Required]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. classmethod from_plugin_url(url: str) → AIPluginTool[source]¶ validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Create a new model by parsing and validating input data from keyword arguments.
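A usage sketch (the manifest URL is a placeholder; from_plugin_url fetches the manifest and the OpenAPI spec it points to):

from langchain.tools.plugin import AIPluginTool

tool = AIPluginTool.from_plugin_url("https://example.com/.well-known/ai-plugin.json")
print(tool.name)     # the plugin's name_for_model
print(tool.run(""))  # the usage guide plus the marshalled OpenAPI spec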
e181f8c7-b50e-4971-b844-7959c4238543
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.SchemaAnnotationError
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.SchemaAnnotationError.html#langchain.tools.base.SchemaAnnotationError
class SchemaAnnotationError(TypeError): """Raised when 'args_schema' is missing or has an incorrect type annotation."""
langchain.tools.base.SchemaAnnotationError¶ class langchain.tools.base.SchemaAnnotationError[source]¶ Bases: TypeError Raised when ‘args_schema’ is missing or has an incorrect type annotation. add_note()¶ Exception.add_note(note) – add a note to the exception with_traceback()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. args¶
Raised when 'args_schema' is missing or has an incorrect type annotation.
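A sketch of the failure mode that raises this error (BadTool and _Input are illustrative): assigning args_schema without the required Type[BaseModel] annotation is rejected by ToolMetaclass at class-definition time.

from pydantic import BaseModel
from langchain.tools.base import BaseTool, SchemaAnnotationError

class _Input(BaseModel):
    query: str

try:
    class BadTool(BaseTool):
        # Should be: args_schema: Type[BaseModel] = _Input
        args_schema = _Input
except SchemaAnnotationError as err:
    print(err)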
2b631696-d772-4335-8f37-e6776feb4bc4
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.ToolMetaclass
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.ToolMetaclass.html#langchain.tools.base.ToolMetaclass
class ToolMetaclass(ModelMetaclass): """Metaclass for BaseTool to ensure the provided args_schema isn't silently ignored.""" def __new__( cls: Type[ToolMetaclass], name: str, bases: Tuple[Type, ...], dct: dict ) -> ToolMetaclass: """Create the definition of the new tool class.""" schema_type: Optional[Type[BaseModel]] = dct.get("args_schema") if schema_type is not None: schema_annotations = dct.get("__annotations__", {}) args_schema_type = schema_annotations.get("args_schema", None) if args_schema_type is None or args_schema_type == BaseModel: # Throw errors for common mis-annotations. # TODO: Use get_args / get_origin and fully # specify valid annotations. typehint_mandate = """ class ChildTool(BaseTool): ... args_schema: Type[BaseModel] = SchemaClass ...""" raise SchemaAnnotationError( f"Tool definition for {name} must include valid type annotations" f" for argument 'args_schema' to behave as expected.\n" f"Expected annotation of 'Type[BaseModel]'" f" but got '{args_schema_type}'.\n" f"Expected class looks like:\n" f"{typehint_mandate}" ) # Pass through to Pydantic's metaclass return super().__new__(cls, name, bases, dct)
langchain.tools.base.ToolMetaclass¶ class langchain.tools.base.ToolMetaclass(name: str, bases: Tuple[Type, ...], dct: dict)[source]¶ Bases: ModelMetaclass Metaclass for BaseTool to ensure the provided args_schema isn’t silently ignored. Create the definition of the new tool class. Methods __init__(*args, **kwargs) mro() Return a type's method resolution order. register(subclass) Register a virtual subclass of an ABC. __call__(*args, **kwargs)¶ Call self as a function. mro()¶ Return a type's method resolution order. register(subclass)¶ Register a virtual subclass of an ABC. Returns the subclass, to allow usage as a class decorator.
Metaclass for BaseTool to ensure the provided args_schema isn't silently ignored.
50078ead-38a0-41be-9c22-b8202a16a62f
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.create_schema_from_function
Function
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.create_schema_from_function.html#langchain.tools.base.create_schema_from_function
def create_schema_from_function( model_name: str, func: Callable, ) -> Type[BaseModel]: """Create a pydantic schema from a function's signature. Args: model_name: Name to assign to the generated pydantic schema func: Function to generate the schema from Returns: A pydantic model with the same arguments as the function """ # https://docs.pydantic.dev/latest/usage/validation_decorator/ validated = validate_arguments(func, config=_SchemaConfig) # type: ignore inferred_model = validated.model # type: ignore if "run_manager" in inferred_model.__fields__: del inferred_model.__fields__["run_manager"] if "callbacks" in inferred_model.__fields__: del inferred_model.__fields__["callbacks"] # Pydantic adds placeholder virtual fields we need to strip valid_properties = _get_filtered_args(inferred_model, func) return _create_subset_model( f"{model_name}Schema", inferred_model, list(valid_properties) )
langchain.tools.base.create_schema_from_function¶ langchain.tools.base.create_schema_from_function(model_name: str, func: Callable) → Type[BaseModel][source]¶ Create a pydantic schema from a function’s signature. :param model_name: Name to assign to the generated pydantic schema :param func: Function to generate the schema from Returns A pydantic model with the same arguments as the function
Create a pydantic schema from a function's signature.
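A usage sketch (the multiply function is illustrative): the generated model mirrors the function's signature.

from langchain.tools.base import create_schema_from_function

def multiply(a: int, b: int = 2) -> int:
    """Multiply a by b."""
    return a * b

MultiplySchema = create_schema_from_function("Multiply", multiply)
# The properties are inferred from the signature: keys "a" and "b".
print(MultiplySchema.schema()["properties"])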
adca598e-7bcc-4368-a649-2607b98471a3
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.ToolException
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.ToolException.html#langchain.tools.base.ToolException
class ToolException(Exception): """An optional exception that a tool throws when an execution error occurs. When this exception is thrown, the agent will not stop working, but will handle the exception according to the handle_tool_error variable of the tool, and the processing result will be returned to the agent as an observation, and printed in red on the console. """ pass
langchain.tools.base.ToolException¶ class langchain.tools.base.ToolException[source]¶ Bases: Exception An optional exception that a tool throws when an execution error occurs. When this exception is thrown, the agent will not stop working, but will handle the exception according to the handle_tool_error variable of the tool, and the processing result will be returned to the agent as an observation, and printed in red on the console. add_note()¶ Exception.add_note(note) – add a note to the exception with_traceback()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. args¶
An optional exception that a tool throws when an execution error occurs.
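A sketch of the intended pattern (the lookup tool is illustrative): with handle_tool_error=True, the exception message becomes the observation instead of propagating.

from langchain.tools.base import Tool, ToolException

def _always_fails(query: str) -> str:
    raise ToolException(f"lookup failed for {query!r}")

tool = Tool.from_function(
    func=_always_fails,
    name="lookup",
    description="Illustrative tool that always fails",
    handle_tool_error=True,  # surface the message rather than re-raising
)
print(tool.run("foo"))  # lookup failed for 'foo'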
17f4130c-e024-4c7c-92ea-99d5aaa78441
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.BaseTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.BaseTool.html#langchain.tools.base.BaseTool
class BaseTool(ABC, BaseModel, metaclass=ToolMetaclass): """Interface LangChain tools must implement.""" name: str """The unique name of the tool that clearly communicates its purpose.""" description: str """Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. """ args_schema: Optional[Type[BaseModel]] = None """Pydantic model class to validate and parse the tool's input arguments.""" return_direct: bool = False """Whether to return the tool's output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. """ verbose: bool = False """Whether to log the tool's progress.""" callbacks: Callbacks = Field(default=None, exclude=True) """Callbacks to be called during tool execution.""" callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True) """Deprecated. Please use callbacks instead.""" tags: Optional[List[str]] = None """Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in `callbacks`. You can use these to eg identify a specific instance of a tool with its use case. """ metadata: Optional[Dict[str, Any]] = None """Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in `callbacks`. You can use these to eg identify a specific instance of a tool with its use case. """ handle_tool_error: Optional[ Union[bool, str, Callable[[ToolException], str]] ] = False """Handle the content of the ToolException thrown.""" class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def is_single_input(self) -> bool: """Whether the tool only accepts a single input.""" keys = {k for k in self.args if k != "kwargs"} return len(keys) == 1 @property def args(self) -> dict: if self.args_schema is not None: return self.args_schema.schema()["properties"] else: schema = create_schema_from_function(self.name, self._run) return schema.schema()["properties"] def _parse_input( self, tool_input: Union[str, Dict], ) -> Union[str, Dict[str, Any]]: """Convert tool input to pydantic model.""" input_args = self.args_schema if isinstance(tool_input, str): if input_args is not None: key_ = next(iter(input_args.__fields__.keys())) input_args.validate({key_: tool_input}) return tool_input else: if input_args is not None: result = input_args.parse_obj(tool_input) return {k: v for k, v in result.dict().items() if k in tool_input} return tool_input @root_validator() def raise_deprecation(cls, values: Dict) -> Dict: """Raise deprecation warning if callback_manager is used.""" if values.get("callback_manager") is not None: warnings.warn( "callback_manager is deprecated. Please use callbacks instead.", DeprecationWarning, ) values["callbacks"] = values.pop("callback_manager", None) return values @abstractmethod def _run( self, *args: Any, **kwargs: Any, ) -> Any: """Use the tool. Add run_manager: Optional[CallbackManagerForToolRun] = None to child implementations to enable tracing, """ @abstractmethod async def _arun( self, *args: Any, **kwargs: Any, ) -> Any: """Use the tool asynchronously. 
Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None to child implementations to enable tracing, """ def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]: # For backwards compatibility, if run_input is a string, # pass as a positional argument. if isinstance(tool_input, str): return (tool_input,), {} else: return (), tool_input def run( self, tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = "green", color: Optional[str] = "green", callbacks: Callbacks = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> Any: """Run the tool.""" parsed_input = self._parse_input(tool_input) if not self.verbose and verbose is not None: verbose_ = verbose else: verbose_ = self.verbose callback_manager = CallbackManager.configure( callbacks, self.callbacks, verbose_, tags, self.tags, metadata, self.metadata, ) # TODO: maybe also pass through run_manager is _run supports kwargs new_arg_supported = signature(self._run).parameters.get("run_manager") run_manager = callback_manager.on_tool_start( {"name": self.name, "description": self.description}, tool_input if isinstance(tool_input, str) else str(tool_input), color=start_color, **kwargs, ) try: tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) observation = ( self._run(*tool_args, run_manager=run_manager, **tool_kwargs) if new_arg_supported else self._run(*tool_args, **tool_kwargs) ) except ToolException as e: if not self.handle_tool_error: run_manager.on_tool_error(e) raise e elif isinstance(self.handle_tool_error, bool): if e.args: observation = e.args[0] else: observation = "Tool execution error" elif isinstance(self.handle_tool_error, str): observation = self.handle_tool_error elif callable(self.handle_tool_error): observation = self.handle_tool_error(e) else: raise ValueError( f"Got unexpected type of `handle_tool_error`. Expected bool, str " f"or callable. 
Received: {self.handle_tool_error}" ) run_manager.on_tool_end( str(observation), color="red", name=self.name, **kwargs ) return observation except (Exception, KeyboardInterrupt) as e: run_manager.on_tool_error(e) raise e else: run_manager.on_tool_end( str(observation), color=color, name=self.name, **kwargs ) return observation async def arun( self, tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = "green", color: Optional[str] = "green", callbacks: Callbacks = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> Any: """Run the tool asynchronously.""" parsed_input = self._parse_input(tool_input) if not self.verbose and verbose is not None: verbose_ = verbose else: verbose_ = self.verbose callback_manager = AsyncCallbackManager.configure( callbacks, self.callbacks, verbose_, tags, self.tags, metadata, self.metadata, ) new_arg_supported = signature(self._arun).parameters.get("run_manager") run_manager = await callback_manager.on_tool_start( {"name": self.name, "description": self.description}, tool_input if isinstance(tool_input, str) else str(tool_input), color=start_color, **kwargs, ) try: # We then call the tool on the tool input to get an observation tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) observation = ( await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs) if new_arg_supported else await self._arun(*tool_args, **tool_kwargs) ) except ToolException as e: if not self.handle_tool_error: await run_manager.on_tool_error(e) raise e elif isinstance(self.handle_tool_error, bool): if e.args: observation = e.args[0] else: observation = "Tool execution error" elif isinstance(self.handle_tool_error, str): observation = self.handle_tool_error elif callable(self.handle_tool_error): observation = self.handle_tool_error(e) else: raise ValueError( f"Got unexpected type of `handle_tool_error`. Expected bool, str " f"or callable. Received: {self.handle_tool_error}" ) await run_manager.on_tool_end( str(observation), color="red", name=self.name, **kwargs ) return observation except (Exception, KeyboardInterrupt) as e: await run_manager.on_tool_error(e) raise e else: await run_manager.on_tool_end( str(observation), color=color, name=self.name, **kwargs ) return observation def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str: """Make tool callable.""" return self.run(tool_input, callbacks=callbacks)
langchain.tools.base.BaseTool¶ class langchain.tools.base.BaseTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]¶ Bases: ABC, BaseModel Interface LangChain tools must implement. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[pydantic.main.BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶ Callbacks to be called during tool execution. param description: str [Required]¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str [Required]¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str[source]¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶ Run the tool asynchronously. validator raise_deprecation  »  all fields[source]¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. 
model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Interface LangChain tools must implement.
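A minimal concrete subclass sketch (EchoTool and EchoInput are illustrative): implement _run and _arun, and annotate args_schema as Type[BaseModel] so ToolMetaclass accepts it.

from typing import Optional, Type
from pydantic import BaseModel
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain.tools.base import BaseTool

class EchoInput(BaseModel):
    text: str

class EchoTool(BaseTool):
    name: str = "echo"
    description: str = "Echo the input text back to the caller."
    args_schema: Type[BaseModel] = EchoInput

    def _run(
        self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        return text

    async def _arun(
        self, text: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        return text

print(EchoTool().run({"text": "hello"}))  # hello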
3d8fc5cd-d35a-4f24-83b5-54a79911ef26
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.Tool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.Tool.html#langchain.tools.base.Tool
class Tool(BaseTool): """Tool that takes in function or coroutine directly.""" description: str = "" func: Callable[..., str] """The function to run when the tool is called.""" coroutine: Optional[Callable[..., Awaitable[str]]] = None """The asynchronous version of the function.""" @property def args(self) -> dict: """The tool's input arguments.""" if self.args_schema is not None: return self.args_schema.schema()["properties"] # For backwards compatibility, if the function signature is ambiguous, # assume it takes a single string input. return {"tool_input": {"type": "string"}} def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]: """Convert tool input to pydantic model.""" args, kwargs = super()._to_args_and_kwargs(tool_input) # For backwards compatibility. The tool must be run with a single input all_args = list(args) + list(kwargs.values()) if len(all_args) != 1: raise ToolException( f"Too many arguments to single-input tool {self.name}." f" Args: {all_args}" ) return tuple(all_args), {} def _run( self, *args: Any, run_manager: Optional[CallbackManagerForToolRun] = None, **kwargs: Any, ) -> Any: """Use the tool.""" new_argument_supported = signature(self.func).parameters.get("callbacks") return ( self.func( *args, callbacks=run_manager.get_child() if run_manager else None, **kwargs, ) if new_argument_supported else self.func(*args, **kwargs) ) async def _arun( self, *args: Any, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, **kwargs: Any, ) -> Any: """Use the tool asynchronously.""" if self.coroutine: new_argument_supported = signature(self.coroutine).parameters.get( "callbacks" ) return ( await self.coroutine( *args, callbacks=run_manager.get_child() if run_manager else None, **kwargs, ) if new_argument_supported else await self.coroutine(*args, **kwargs) ) raise NotImplementedError("Tool does not support async") # TODO: this is for backwards compatibility, remove in future def __init__( self, name: str, func: Callable, description: str, **kwargs: Any ) -> None: """Initialize tool.""" super(Tool, self).__init__( name=name, func=func, description=description, **kwargs ) @classmethod def from_function( cls, func: Callable, name: str, # We keep these required to support backwards compatibility description: str, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, **kwargs: Any, ) -> Tool: """Initialize tool from a function.""" return cls( name=name, func=func, description=description, return_direct=return_direct, args_schema=args_schema, **kwargs, )
langchain.tools.base.Tool¶ class langchain.tools.base.Tool(name: str, func: Callable, description: str, *, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, coroutine: Optional[Callable[[...], Awaitable[str]]] = None)[source]¶ Bases: BaseTool Tool that takes in function or coroutine directly. Initialize tool. param args_schema: Optional[Type[pydantic.main.BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶ Callbacks to be called during tool execution. param coroutine: Optional[Callable[[...], Awaitable[str]]] = None¶ The asynchronous version of the function. param description: str = ''¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param func: Callable[[...], str] [Required]¶ The function to run when the tool is called. param handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str [Required]¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, **kwargs: Any) → Tool[source]¶ Initialize tool from a function. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ The tool’s input arguments. property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Tool that takes in function or coroutine directly.
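A minimal usage sketch of the class above: the `search_api` function and its return value are hypothetical placeholders; only `Tool` itself comes from this module.

from langchain.tools.base import Tool

def search_api(query: str) -> str:
    """Hypothetical stand-in for a real search call."""
    return f"results for {query!r}"

search_tool = Tool(
    name="search",
    func=search_api,
    description="Searches the API for the query.",
)
# Single-input tool: one string in, one string out.
print(search_tool.run("langchain"))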
e6ca7065-1283-4730-adc3-4585b664ed1f
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.StructuredTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.StructuredTool.html#langchain.tools.base.StructuredTool
class StructuredTool(BaseTool):
    """Tool that can operate on any number of inputs."""

    description: str = ""
    args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
    """The input arguments' schema."""
    func: Callable[..., Any]
    """The function to run when the tool is called."""
    coroutine: Optional[Callable[..., Awaitable[Any]]] = None
    """The asynchronous version of the function."""

    @property
    def args(self) -> dict:
        """The tool's input arguments."""
        return self.args_schema.schema()["properties"]

    def _run(
        self,
        *args: Any,
        run_manager: Optional[CallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> Any:
        """Use the tool."""
        new_argument_supported = signature(self.func).parameters.get("callbacks")
        return (
            self.func(
                *args,
                callbacks=run_manager.get_child() if run_manager else None,
                **kwargs,
            )
            if new_argument_supported
            else self.func(*args, **kwargs)
        )

    async def _arun(
        self,
        *args: Any,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> str:
        """Use the tool asynchronously."""
        if self.coroutine:
            new_argument_supported = signature(self.coroutine).parameters.get(
                "callbacks"
            )
            return (
                await self.coroutine(
                    *args,
                    callbacks=run_manager.get_child() if run_manager else None,
                    **kwargs,
                )
                if new_argument_supported
                else await self.coroutine(*args, **kwargs)
            )
        raise NotImplementedError("Tool does not support async")

    @classmethod
    def from_function(
        cls,
        func: Callable,
        name: Optional[str] = None,
        description: Optional[str] = None,
        return_direct: bool = False,
        args_schema: Optional[Type[BaseModel]] = None,
        infer_schema: bool = True,
        **kwargs: Any,
    ) -> StructuredTool:
        """Create tool from a given function.

        A classmethod that helps to create a tool from a function.

        Args:
            func: The function from which to create a tool
            name: The name of the tool. Defaults to the function name
            description: The description of the tool. Defaults to the function docstring
            return_direct: Whether to return the result directly or as a callback
            args_schema: The schema of the tool's input arguments
            infer_schema: Whether to infer the schema from the function's signature
            **kwargs: Additional arguments to pass to the tool

        Returns:
            The tool

        Examples:

            .. code-block:: python

                def add(a: int, b: int) -> int:
                    \"\"\"Add two numbers\"\"\"
                    return a + b
                tool = StructuredTool.from_function(add)
                tool.run(1, 2)  # 3
        """
        name = name or func.__name__
        description = description or func.__doc__
        assert (
            description is not None
        ), "Function must have a docstring if description not provided."
        # Description example:
        # search_api(query: str) - Searches the API for the query.
        description = f"{name}{signature(func)} - {description.strip()}"
        _args_schema = args_schema
        if _args_schema is None and infer_schema:
            _args_schema = create_schema_from_function(f"{name}Schema", func)
        return cls(
            name=name,
            func=func,
            args_schema=_args_schema,
            description=description,
            return_direct=return_direct,
            **kwargs,
        )
langchain.tools.base.StructuredTool¶ class langchain.tools.base.StructuredTool(*, name: str, description: str = '', args_schema: Type[BaseModel], return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, func: Callable[[...], Any], coroutine: Optional[Callable[[...], Awaitable[Any]]] = None)[source]¶ Bases: BaseTool Tool that can operate on any number of inputs. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Type[pydantic.main.BaseModel] [Required]¶ The input arguments’ schema. The tool schema. param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶ Callbacks to be called during tool execution. param coroutine: Optional[Callable[[...], Awaitable[Any]]] = None¶ The asynchronous version of the function. param description: str = ''¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param func: Callable[[...], Any] [Required]¶ The function to run when the tool is called. param handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str [Required]¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. classmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) → StructuredTool[source]¶ Create tool from a given function. A classmethod that helps to create a tool from a function. 
Parameters func – The function from which to create a tool name – The name of the tool. Defaults to the function name description – The description of the tool. Defaults to the function docstring return_direct – Whether to return the result directly or as a callback args_schema – The schema of the tool’s input arguments infer_schema – Whether to infer the schema from the function’s signature **kwargs – Additional arguments to pass to the tool Returns The tool Examples

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tool = StructuredTool.from_function(add)
tool.run(1, 2)  # 3

validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ The tool’s input arguments. property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Tool that can operate on any number of inputs.
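To make the from_function flow above concrete, here is a small sketch; `add` is the same toy function used in the docstring, and the dict input reflects that run() accepts a Union[str, Dict] for multi-input tools.

from langchain.tools.base import StructuredTool

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tool = StructuredTool.from_function(add)
# Multi-input tools are invoked with a dict of argument names to values.
print(tool.run({"a": 1, "b": 2}))  # -> 3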
7d083ad0-ace6-4dc2-b14b-837f55dba960
[ "__future__.annotations", "warnings", "abc.ABC", "abc.abstractmethod", "inspect.signature", "typing.Any", "typing.Awaitable", "typing.Callable", "typing.Dict", "typing.List", "typing.Optional", "typing.Tuple", "typing.Type", "typing.Union", "pydantic.BaseModel", "pydantic.Extra", "pydantic.Field", "pydantic.create_model", "pydantic.root_validator", "pydantic.validate_arguments", "pydantic.main.ModelMetaclass", "langchain.callbacks.base.BaseCallbackManager", "langchain.callbacks.manager.AsyncCallbackManager", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManager", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.callbacks.manager.Callbacks" ]
langchain.tools.base.tool
Function
https://api.python.langchain.com/en/latest/tools/langchain.tools.base.tool.html#langchain.tools.base.tool
def tool(
    *args: Union[str, Callable],
    return_direct: bool = False,
    args_schema: Optional[Type[BaseModel]] = None,
    infer_schema: bool = True,
) -> Callable:
    """Make tools out of functions, can be used with or without arguments.

    Args:
        *args: The arguments to the tool.
        return_direct: Whether to return directly from the tool rather
            than continuing the agent loop.
        args_schema: optional argument schema for user to specify
        infer_schema: Whether to infer the schema of the arguments from
            the function's signature. This also makes the resultant tool
            accept a dictionary input to its `run()` function.

    Requires:
        - Function must be of type (str) -> str
        - Function must have a docstring

    Examples:
        .. code-block:: python

            @tool
            def search_api(query: str) -> str:
                # Searches the API for the query.
                return

            @tool("search", return_direct=True)
            def search_api(query: str) -> str:
                # Searches the API for the query.
                return
    """

    def _make_with_name(tool_name: str) -> Callable:
        def _make_tool(func: Callable) -> BaseTool:
            if infer_schema or args_schema is not None:
                return StructuredTool.from_function(
                    func,
                    name=tool_name,
                    return_direct=return_direct,
                    args_schema=args_schema,
                    infer_schema=infer_schema,
                )
            # If someone doesn't want a schema applied, we must treat it as
            # a simple string->string function
            assert func.__doc__ is not None, "Function must have a docstring"
            return Tool(
                name=tool_name,
                func=func,
                description=f"{tool_name} tool",
                return_direct=return_direct,
            )

        return _make_tool

    if len(args) == 1 and isinstance(args[0], str):
        # if the argument is a string, then we use the string as the tool name
        # Example usage: @tool("search", return_direct=True)
        return _make_with_name(args[0])
    elif len(args) == 1 and callable(args[0]):
        # if the argument is a function, then we use the function name as the tool name
        # Example usage: @tool
        return _make_with_name(args[0].__name__)(args[0])
    elif len(args) == 0:
        # if there are no arguments, then we use the function name as the tool name
        # Example usage: @tool(return_direct=True)
        def _partial(func: Callable[[str], str]) -> BaseTool:
            return _make_with_name(func.__name__)(func)

        return _partial
    else:
        raise ValueError("Too many arguments for tool decorator")
langchain.tools.base.tool¶ langchain.tools.base.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[BaseModel]] = None, infer_schema: bool = True) → Callable[source]¶ Make tools out of functions, can be used with or without arguments. Parameters *args – The arguments to the tool. return_direct – Whether to return directly from the tool rather than continuing the agent loop. args_schema – optional argument schema for user to specify infer_schema – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function. Requires: Function must be of type (str) -> str Function must have a docstring Examples

@tool
def search_api(query: str) -> str:
    # Searches the API for the query.
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    # Searches the API for the query.
    return
Make tools out of functions, can be used with or without arguments.
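A sketch of the three decorator forms handled by the dispatch logic above; the `search_api_*` functions are placeholders.

from langchain.tools.base import tool

@tool
def search_api(query: str) -> str:
    """Search the API for the query (placeholder)."""
    return f"results for {query}"

@tool("search", return_direct=True)
def search_api_named(query: str) -> str:
    """Same placeholder, but the tool is explicitly named "search"."""
    return f"results for {query}"

@tool(return_direct=True)
def search_api_kwargs(query: str) -> str:
    """Placeholder again; the tool name defaults to the function name."""
    return f"results for {query}"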
29ed928c-767f-475e-8018-05f4e5ad847d
[ "typing.TypedDict", "langchain.tools.BaseTool", "langchain.tools.StructuredTool" ]
langchain.tools.convert_to_openai.FunctionDescription
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.convert_to_openai.FunctionDescription.html#langchain.tools.convert_to_openai.FunctionDescription
class FunctionDescription(TypedDict):
    """Representation of a callable function to the OpenAI API."""

    name: str
    """The name of the function."""
    description: str
    """A description of the function."""
    parameters: dict
    """The parameters of the function."""
langchain.tools.convert_to_openai.FunctionDescription¶ class langchain.tools.convert_to_openai.FunctionDescription[source]¶ Bases: TypedDict Representation of a callable function to the OpenAI API. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() Attributes name The name of the function. description A description of the function. parameters The parameters of the function. clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶ Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values¶ description: str¶ A description of the function. name: str¶ The name of the function. parameters: dict¶ The parameters of the function.
Representation of a callable function to the OpenAI API.
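Since FunctionDescription is a TypedDict, instances are plain dicts; a minimal sketch of the shape it describes (the parameter schema here is illustrative).

from langchain.tools.convert_to_openai import FunctionDescription

fd: FunctionDescription = {
    "name": "search",
    "description": "Searches the API for the query.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}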
c3e24186-fa4e-4c14-be23-58e20fd1fa56
[ "typing.TypedDict", "langchain.tools.BaseTool", "langchain.tools.StructuredTool" ]
langchain.tools.convert_to_openai.format_tool_to_openai_function
Function
https://api.python.langchain.com/en/latest/tools/langchain.tools.convert_to_openai.format_tool_to_openai_function.html#langchain.tools.convert_to_openai.format_tool_to_openai_function
def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
    """Format tool into the OpenAI function API."""
    if isinstance(tool, StructuredTool):
        schema_ = tool.args_schema.schema()
        # Bug with required missing for structured tools.
        required = sorted(schema_["properties"])  # BUG WORKAROUND
        return {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": schema_["properties"],
                "required": required,
            },
        }
    else:
        if tool.args_schema:
            parameters = tool.args_schema.schema()
        else:
            parameters = {
                # This is a hack to get around the fact that some tools
                # do not expose an args_schema, and expect an argument
                # which is a string.
                # And OpenAI does not support an array type for the
                # parameters.
                "properties": {
                    "__arg1": {"title": "__arg1", "type": "string"},
                },
                "required": ["__arg1"],
                "type": "object",
            }
        return {
            "name": tool.name,
            "description": tool.description,
            "parameters": parameters,
        }
langchain.tools.convert_to_openai.format_tool_to_openai_function¶ langchain.tools.convert_to_openai.format_tool_to_openai_function(tool: BaseTool) → FunctionDescription[source]¶ Format tool into the OpenAI function API.
Format tool into the OpenAI function API.
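A sketch of converting a tool for OpenAI function calling; the `add` tool is a placeholder, and passing the resulting dict to an OpenAI client is outside this module.

from langchain.tools.base import StructuredTool
from langchain.tools.convert_to_openai import format_tool_to_openai_function

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tool = StructuredTool.from_function(add)
function_description = format_tool_to_openai_function(tool)
# The sorted-required workaround above means both args are marked required.
print(function_description["parameters"]["required"])  # ['a', 'b']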
4da8df8f-d31e-48d5-beb7-840f73e40c74
[ "ast", "asyncio", "re", "sys", "contextlib.redirect_stdout", "io.StringIO", "typing.Any", "typing.Dict", "typing.Optional", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.PythonREPL" ]
langchain.tools.python.tool.sanitize_input
Function
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.sanitize_input.html#langchain.tools.python.tool.sanitize_input
def sanitize_input(query: str) -> str:
    """Sanitize input to the python REPL.

    Removes whitespace, backticks, and a leading "python" keyword (in case
    the LLM mistakes the Python console for a terminal).

    Args:
        query: The query to sanitize

    Returns:
        str: The sanitized query
    """
    # Removes `, whitespace & python from start
    query = re.sub(r"^(\s|`)*(?i:python)?\s*", "", query)
    # Removes whitespace & ` from end
    query = re.sub(r"(\s|`)*$", "", query)
    return query
langchain.tools.python.tool.sanitize_input¶ langchain.tools.python.tool.sanitize_input(query: str) → str[source]¶ Sanitize input to the python REPL. Removes whitespace, backticks, and a leading "python" keyword (in case the LLM mistakes the Python console for a terminal). Parameters query – The query to sanitize Returns The sanitized query Return type str
Sanitize input to the python REPL.
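A quick illustration of the two regex passes above on a typical LLM response wrapped in a fenced code block.

from langchain.tools.python.tool import sanitize_input

raw = "```python\nprint(1 + 1)\n```"
# Leading backticks and the "python" marker are stripped, then the
# trailing backticks and whitespace.
print(sanitize_input(raw))  # -> "print(1 + 1)"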
fbb1075a-99de-4754-9d04-f3cc04dd6337
[ "ast", "asyncio", "re", "sys", "contextlib.redirect_stdout", "io.StringIO", "typing.Any", "typing.Dict", "typing.Optional", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.PythonREPL" ]
langchain.tools.python.tool.PythonREPLTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonREPLTool.html#langchain.tools.python.tool.PythonREPLTool
class PythonREPLTool(BaseTool):
    """A tool for running python code in a REPL."""

    name = "Python_REPL"
    description = (
        "A Python shell. Use this to execute python commands. "
        "Input should be a valid python command. "
        "If you want to see the output of a value, you should print it out "
        "with `print(...)`."
    )
    python_repl: PythonREPL = Field(default_factory=_get_default_python_repl)
    sanitize_input: bool = True

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> Any:
        """Use the tool."""
        if self.sanitize_input:
            query = sanitize_input(query)
        return self.python_repl.run(query)

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> Any:
        """Use the tool asynchronously."""
        if self.sanitize_input:
            query = sanitize_input(query)
        loop = asyncio.get_running_loop()
        result = await loop.run_in_executor(None, self.run, query)
        return result
langchain.tools.python.tool.PythonREPLTool¶ class langchain.tools.python.tool.PythonREPLTool(*, name: str = 'Python_REPL', description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, python_repl: PythonREPL = None, sanitize_input: bool = True)[source]¶ Bases: BaseTool A tool for running python code in a REPL. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'Python_REPL'¶ The unique name of the tool that clearly communicates its purpose. param python_repl: langchain.utilities.python.PythonREPL [Optional]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param sanitize_input: bool = True¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
A tool for running python code in a REPL.
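A minimal sketch of running the tool directly; note the description's guidance that values must be printed to be seen, since the underlying PythonREPL captures stdout.

from langchain.tools.python.tool import PythonREPLTool

repl_tool = PythonREPLTool()
# The REPL returns whatever the command wrote to stdout.
print(repl_tool.run("print(2 ** 10)"))  # -> "1024\n"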
26d14314-2ef7-469d-852f-fc5aed1e0ea3
[ "ast", "asyncio", "re", "sys", "contextlib.redirect_stdout", "io.StringIO", "typing.Any", "typing.Dict", "typing.Optional", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.PythonREPL" ]
langchain.tools.python.tool.PythonAstREPLTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.python.tool.PythonAstREPLTool.html#langchain.tools.python.tool.PythonAstREPLTool
class PythonAstREPLTool(BaseTool):
    """A tool for running python code in a REPL."""

    name = "python_repl_ast"
    description = (
        "A Python shell. Use this to execute python commands. "
        "Input should be a valid python command. "
        "When using this tool, sometimes output is abbreviated - "
        "make sure it does not look abbreviated before using it in your answer."
    )
    globals: Optional[Dict] = Field(default_factory=dict)
    locals: Optional[Dict] = Field(default_factory=dict)
    sanitize_input: bool = True

    @root_validator(pre=True)
    def validate_python_version(cls, values: Dict) -> Dict:
        """Validate valid python version."""
        if sys.version_info < (3, 9):
            raise ValueError(
                "This tool relies on Python 3.9 or higher "
                "(as it uses new functionality in the `ast` module); "
                f"you have Python version: {sys.version}"
            )
        return values

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        try:
            if self.sanitize_input:
                query = sanitize_input(query)
            tree = ast.parse(query)
            # Execute everything except the final statement, then evaluate the
            # final statement and return its value (or the captured stdout).
            module = ast.Module(tree.body[:-1], type_ignores=[])
            exec(ast.unparse(module), self.globals, self.locals)  # type: ignore
            module_end = ast.Module(tree.body[-1:], type_ignores=[])
            module_end_str = ast.unparse(module_end)  # type: ignore
            io_buffer = StringIO()
            try:
                with redirect_stdout(io_buffer):
                    ret = eval(module_end_str, self.globals, self.locals)
                    if ret is None:
                        return io_buffer.getvalue()
                    else:
                        return ret
            except Exception:
                with redirect_stdout(io_buffer):
                    exec(module_end_str, self.globals, self.locals)
                return io_buffer.getvalue()
        except Exception as e:
            return "{}: {}".format(type(e).__name__, str(e))

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("PythonAstREPLTool does not support async")
langchain.tools.python.tool.PythonAstREPLTool¶ class langchain.tools.python.tool.PythonAstREPLTool(*, name: str = 'python_repl_ast', description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, globals: Optional[Dict] = None, locals: Optional[Dict] = None, sanitize_input: bool = True)[source]¶ Bases: BaseTool A tool for running python code in a REPL. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param globals: Optional[Dict] [Optional]¶ param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param locals: Optional[Dict] [Optional]¶ param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'python_repl_ast'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param sanitize_input: bool = True¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. 
validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. validator validate_python_version  »  all fields[source]¶ Validate valid python version. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
A tool for running python code in a REPL.
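In contrast to PythonREPLTool, the AST variant returns the value of the final expression, so no print() is needed; a small sketch (requires Python 3.9+ per the validator above).

from langchain.tools.python.tool import PythonAstREPLTool

ast_tool = PythonAstREPLTool(locals={"data": [1, 2, 3]})
# All statements run, then the value of the trailing expression is returned.
print(ast_tool.run("total = sum(data)\ntotal"))  # -> 6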
de668fdd-31dd-42e0-8ad2-b081fec3cf9d
[ "typing.Optional", "pydantic.BaseModel", "pydantic.Field", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.scenexplain.SceneXplainAPIWrapper" ]
langchain.tools.scenexplain.tool.SceneXplainInput
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainInput.html#langchain.tools.scenexplain.tool.SceneXplainInput
class SceneXplainInput(BaseModel):
    """Input for SceneXplain."""

    query: str = Field(..., description="The link to the image to explain")
langchain.tools.scenexplain.tool.SceneXplainInput¶ class langchain.tools.scenexplain.tool.SceneXplainInput(*, query: str)[source]¶ Bases: BaseModel Input for SceneXplain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param query: str [Required]¶ The link to the image to explain
Input for SceneXplain.
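A tiny sketch of the input schema in use; the image URL is a placeholder.

from langchain.tools.scenexplain.tool import SceneXplainInput

inp = SceneXplainInput(query="https://example.com/image.jpg")
print(inp.query)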
99877561-a80e-4b2c-b017-7131575c6cee
[ "typing.Optional", "pydantic.BaseModel", "pydantic.Field", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.scenexplain.SceneXplainAPIWrapper" ]
langchain.tools.scenexplain.tool.SceneXplainTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.scenexplain.tool.SceneXplainTool.html#langchain.tools.scenexplain.tool.SceneXplainTool
class SceneXplainTool(BaseTool):
    """Tool that adds the capability to explain images."""

    name = "image_explainer"
    description = (
        "An Image Captioning Tool: Use this tool to generate a detailed caption "
        "for an image. The input can be an image file of any format, and "
        "the output will be a text description that covers every detail of the image."
    )
    api_wrapper: SceneXplainAPIWrapper = Field(default_factory=SceneXplainAPIWrapper)

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return self.api_wrapper.run(query)

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("SceneXplainTool does not support async")
langchain.tools.scenexplain.tool.SceneXplainTool¶ class langchain.tools.scenexplain.tool.SceneXplainTool(*, name: str = 'image_explainer', description: str = 'An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: SceneXplainAPIWrapper = None)[source]¶ Bases: BaseTool Tool that adds the capability to explain images. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]¶ param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'image_explainer'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Tool that adds the capability to explain images.
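A hedged sketch of calling the tool; it assumes valid SceneXplain credentials are available to the wrapper (typically via an environment variable such as SCENEX_API_KEY) and uses a placeholder image URL.

from langchain.tools.scenexplain.tool import SceneXplainTool

scenex = SceneXplainTool()  # api_wrapper is built via default_factory
caption = scenex.run("https://example.com/image.jpg")  # placeholder URL
print(caption)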
bf6ef509-231d-4d4f-8be2-ed6d0d5e18ba
[ "typing.Optional", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.bing_search.BingSearchAPIWrapper" ]
langchain.tools.bing_search.tool.BingSearchRun
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchRun.html#langchain.tools.bing_search.tool.BingSearchRun
class BingSearchRun(BaseTool):
    """Tool that adds the capability to query the Bing search API."""

    name = "bing_search"
    description = (
        "A wrapper around Bing Search. "
        "Useful for when you need to answer questions about current events. "
        "Input should be a search query."
    )
    api_wrapper: BingSearchAPIWrapper

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return self.api_wrapper.run(query)

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("BingSearchRun does not support async")
langchain.tools.bing_search.tool.BingSearchRun¶ class langchain.tools.bing_search.tool.BingSearchRun(*, name: str = 'bing_search', description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: BingSearchAPIWrapper)[source]¶ Bases: BaseTool Tool that adds the capability to query the Bing search API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]¶ param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'bing_search'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Tool that adds the capability to query the Bing search API.
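A sketch of wiring the tool to its wrapper; it assumes Bing Search credentials are configured in the environment (the wrapper reads values such as BING_SUBSCRIPTION_KEY and BING_SEARCH_URL).

from langchain.tools.bing_search.tool import BingSearchRun
from langchain.utilities.bing_search import BingSearchAPIWrapper

bing_tool = BingSearchRun(api_wrapper=BingSearchAPIWrapper())
print(bing_tool.run("latest langchain release"))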
44d7b04d-b92f-46f3-bf4f-8d7c3ce70a41
[ "typing.Optional", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.utilities.bing_search.BingSearchAPIWrapper" ]
langchain.tools.bing_search.tool.BingSearchResults
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.bing_search.tool.BingSearchResults.html#langchain.tools.bing_search.tool.BingSearchResults
class BingSearchResults(BaseTool):
    """Tool that has capability to query the Bing Search API and get back json."""

    name = "Bing Search Results JSON"
    description = (
        "A wrapper around Bing Search. "
        "Useful for when you need to answer questions about current events. "
        "Input should be a search query. Output is a JSON array of the query results"
    )
    num_results: int = 4
    api_wrapper: BingSearchAPIWrapper

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return str(self.api_wrapper.results(query, self.num_results))

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("BingSearchResults does not support async")
langchain.tools.bing_search.tool.BingSearchResults¶ class langchain.tools.bing_search.tool.BingSearchResults(*, name: str = 'Bing Search Results JSON', description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, num_results: int = 4, api_wrapper: BingSearchAPIWrapper)[source]¶ Bases: BaseTool Tool that has capability to query the Bing Search API and get back json. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]¶ param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'Bing Search Results JSON'¶ The unique name of the tool that clearly communicates its purpose. param num_results: int = 4¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Tool that has capability to query the Bing Search API and get back json.
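The results variant is constructed the same way but returns a stringified list of result dicts, with num_results controlling how many come back; same credential assumptions as the sketch above.

from langchain.tools.bing_search.tool import BingSearchResults
from langchain.utilities.bing_search import BingSearchAPIWrapper

bing_results = BingSearchResults(api_wrapper=BingSearchAPIWrapper(), num_results=2)
print(bing_results.run("latest langchain release"))  # str of a list of dicts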
52385334-4df0-4e27-a135-b793bae61bc4
[ "json", "typing.Optional", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.BaseTool" ]
langchain.tools.youtube.search.YouTubeSearchTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.youtube.search.YouTubeSearchTool.html#langchain.tools.youtube.search.YouTubeSearchTool
class YouTubeSearchTool(BaseTool):
    name = "youtube_search"
    description = (
        "search for youtube videos associated with a person. "
        "the input to this tool should be a comma separated list, "
        "the first part contains a person name and the second a "
        "number that is the maximum number of video results "
        "to return aka num_results. the second part is optional"
    )

    def _search(self, person: str, num_results: int) -> str:
        from youtube_search import YoutubeSearch

        results = YoutubeSearch(person, num_results).to_json()
        data = json.loads(results)
        url_suffix_list = [video["url_suffix"] for video in data["videos"]]
        return str(url_suffix_list)

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        values = query.split(",")
        person = values[0]
        if len(values) > 1:
            num_results = int(values[1])
        else:
            num_results = 2
        return self._search(person, num_results)

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("YouTubeSearchTool does not yet support async")
langchain.tools.youtube.search.YouTubeSearchTool¶ class langchain.tools.youtube.search.YouTubeSearchTool(*, name: str = 'youtube_search', description: str = 'search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]¶ Bases: BaseTool Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'youtube_search'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Create a new model by parsing and validating input data from keyword arguments.
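A minimal usage sketch (not part of the record above): it assumes the optional youtube_search package is installed (pip install youtube_search) and that network access is available. Per the tool description, the input is a person name, optionally followed by a comma and a maximum result count; num_results defaults to 2 when omitted.

from langchain.tools.youtube.search import YouTubeSearchTool

tool = YouTubeSearchTool()
# "lex fridman" is a placeholder query; ",3" caps the results at three videos.
# The call returns a stringified list of URL suffixes, e.g. "['/watch?v=...']".
print(tool.run("lex fridman,3"))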
7bc3e1b3-a4aa-4070-8673-9d0bf885ae69
[ "typing.Any", "typing.Dict", "typing.Optional", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.tools.zapier.prompt.BASE_ZAPIER_TOOL_PROMPT", "langchain.utilities.zapier.ZapierNLAWrapper" ]
langchain.tools.zapier.tool.ZapierNLARunAction
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLARunAction.html#langchain.tools.zapier.tool.ZapierNLARunAction
class ZapierNLARunAction(BaseTool):
    """
    Args:
        action_id: a specific action ID (from list actions) of the action to execute
            (the set api_key must be associated with the action owner)
        instructions: a natural language instruction string for using the action
            (eg. "get the latest email from Mike Knoop" for "Gmail: find email" action)
        params: a dict, optional. Any params provided will *override* AI guesses
            from `instructions` (see "understanding the AI guessing flow" here:
            https://nla.zapier.com/docs/using-the-api#ai-guessing)
    """

    api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)
    action_id: str
    params: Optional[dict] = None
    base_prompt: str = BASE_ZAPIER_TOOL_PROMPT
    zapier_description: str
    params_schema: Dict[str, str] = Field(default_factory=dict)
    name = ""
    description = ""

    @root_validator
    def set_name_description(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        zapier_description = values["zapier_description"]
        params_schema = values["params_schema"]
        if "instructions" in params_schema:
            del params_schema["instructions"]

        # Ensure base prompt (if overridden) contains necessary input fields
        necessary_fields = {"{zapier_description}", "{params}"}
        if not all(field in values["base_prompt"] for field in necessary_fields):
            raise ValueError(
                "Your custom base Zapier prompt must contain input fields for "
                "{zapier_description} and {params}."
            )

        values["name"] = zapier_description
        values["description"] = values["base_prompt"].format(
            zapier_description=zapier_description,
            params=str(list(params_schema.keys())),
        )
        return values

    def _run(
        self, instructions: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the Zapier NLA tool to return a list of all exposed user actions."""
        return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)

    async def _arun(
        self,
        instructions: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the Zapier NLA tool to return a list of all exposed user actions."""
        return await self.api_wrapper.arun_as_str(
            self.action_id,
            instructions,
            self.params,
        )
langchain.tools.zapier.tool.ZapierNLARunAction¶ class langchain.tools.zapier.tool.ZapierNLARunAction(*, name: str = '', description: str = '', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: ZapierNLAWrapper = None, action_id: str, params: Optional[dict] = None, base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description: str, params_schema: Dict[str, str] = None)[source]¶ Bases: BaseTool Executes an action that is identified by action_id, must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return JSON is guaranteed to be less than ~500 words (350 tokens) making it safe to inject into the prompt of another LLM call. Parameters action_id – a specific action ID (from list actions) of the action to execute (the set api_key must be associated with the action owner) instructions – a natural language instruction string for using the action (eg. “get the latest email from Mike Knoop” for “Gmail: find email” action) params – a dict, optional. Any params provided will override AI guesses from instructions (see “understanding the AI guessing flow” here: https://nla.zapier.com/docs/using-the-api#ai-guessing) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param action_id: str [Required]¶ param api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]¶ param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. 
For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = ''¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = ''¶ The unique name of the tool that clearly communicates its purpose. param params: Optional[dict] = None¶ param params_schema: Dict[str, str] [Optional]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. param zapier_description: str [Required]¶ __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. validator set_name_description  »  all fields[source]¶ property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Executes an action that is identified by action_id, must be exposed
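A hypothetical usage sketch: the action ID and description below are placeholders, and a valid ZAPIER_NLA_API_KEY environment variable (read by ZapierNLAWrapper) is assumed. The root validator copies zapier_description into the tool's name and formats the params_schema keys into its description.

from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper

action = ZapierNLARunAction(
    api_wrapper=ZapierNLAWrapper(),
    action_id="01234567-89ab-cdef-0123-456789abcdef",  # placeholder action ID
    zapier_description="Gmail: Find Email",  # becomes the tool's name
    params_schema={"Search_String": "str"},  # surfaced in the tool's description
)
# Explicit `params` passed at construction would override AI guesses made
# from the natural-language instruction.
print(action.run("get the latest email from Mike Knoop"))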
14a57f7a-780c-4bc9-9399-190d2b5abc91
[ "typing.Any", "typing.Dict", "typing.Optional", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.base.BaseTool", "langchain.tools.zapier.prompt.BASE_ZAPIER_TOOL_PROMPT", "langchain.utilities.zapier.ZapierNLAWrapper" ]
langchain.tools.zapier.tool.ZapierNLAListActions
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.zapier.tool.ZapierNLAListActions.html#langchain.tools.zapier.tool.ZapierNLAListActions
class ZapierNLAListActions(BaseTool):
    """
    Args:
        None
    """

    name = "ZapierNLA_list_actions"
    description = BASE_ZAPIER_TOOL_PROMPT + (
        "This tool returns a list of the user's exposed actions."
    )
    api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)

    def _run(
        self,
        _: str = "",
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the Zapier NLA tool to return a list of all exposed user actions."""
        return self.api_wrapper.list_as_str()

    async def _arun(
        self,
        _: str = "",
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the Zapier NLA tool to return a list of all exposed user actions."""
        return await self.api_wrapper.alist_as_str()
langchain.tools.zapier.tool.ZapierNLAListActions¶ class langchain.tools.zapier.tool.ZapierNLAListActions(*, name: str = 'ZapierNLA_list_actions', description: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\'s exposed actions.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, api_wrapper: ZapierNLAWrapper = None)[source]¶ Bases: BaseTool Returns a list of all exposed (enabled) actions associated with current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return list can be empty if no actions exposed. Else will contain a list of action objects: [{“id”: str, “description”: str, “params”: Dict[str, str] }] params will always contain an instructions key, the only required param. All others optional and if provided will override any AI guesses (see “understanding the AI guessing flow” here: https://nla.zapier.com/docs/using-the-api#ai-guessing) Parameters None – Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]¶ param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. 
Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\'s exposed actions.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'ZapierNLA_list_actions'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Returns a list of all exposed (enabled) actions associated with
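A minimal sketch, again assuming ZAPIER_NLA_API_KEY is set for ZapierNLAWrapper. The input string is ignored; the tool returns the user's exposed actions serialized as a string.

from langchain.tools.zapier.tool import ZapierNLAListActions
from langchain.utilities.zapier import ZapierNLAWrapper

list_actions = ZapierNLAListActions(api_wrapper=ZapierNLAWrapper())
# Each listed action carries an "id", a "description", and its "params".
print(list_actions.run(""))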
01dc2655-fc26-4ce1-9654-de9c7e259415
[ "logging", "time.perf_counter", "typing.Any", "typing.Dict", "typing.Optional", "typing.Tuple", "pydantic.Field", "pydantic.validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.chains.llm.LLMChain", "langchain.chat_models.openai._import_tiktoken", "langchain.tools.base.BaseTool", "langchain.tools.powerbi.prompt.BAD_REQUEST_RESPONSE", "langchain.tools.powerbi.prompt.DEFAULT_FEWSHOT_EXAMPLES", "langchain.tools.powerbi.prompt.RETRY_RESPONSE", "langchain.utilities.powerbi.PowerBIDataset", "langchain.utilities.powerbi.json_to_md" ]
langchain.tools.powerbi.tool.QueryPowerBITool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.QueryPowerBITool.html#langchain.tools.powerbi.tool.QueryPowerBITool
class QueryPowerBITool(BaseTool):
    """Tool for querying a Power BI Dataset."""

    name = "query_powerbi"
    description = """
    Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.

    Example Input: "How many rows are in table1?"
    """  # noqa: E501
    llm_chain: LLMChain
    powerbi: PowerBIDataset = Field(exclude=True)
    examples: Optional[str] = DEFAULT_FEWSHOT_EXAMPLES
    session_cache: Dict[str, Any] = Field(default_factory=dict, exclude=True)
    max_iterations: int = 5
    output_token_limit: int = 4000
    tiktoken_model_name: Optional[str] = None  # "cl100k_base"

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @validator("llm_chain")
    def validate_llm_chain_input_variables(  # pylint: disable=E0213
        cls, llm_chain: LLMChain
    ) -> LLMChain:
        """Make sure the LLM chain has the correct input variables."""
        for var in llm_chain.prompt.input_variables:
            if var not in ["tool_input", "tables", "schemas", "examples"]:
                raise ValueError(
                    "LLM chain for QueryPowerBITool must have input variables ['tool_input', 'tables', 'schemas', 'examples'], found %s",  # noqa: C0301 E501 # pylint: disable=C0301
                    llm_chain.prompt.input_variables,
                )
        return llm_chain

    def _check_cache(self, tool_input: str) -> Optional[str]:
        """Check if the input is present in the cache.

        If the value is a bad request, overwrite with the escalated version,
        if not present return None."""
        if tool_input not in self.session_cache:
            return None
        return self.session_cache[tool_input]

    def _run(
        self,
        tool_input: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> str:
        """Execute the query, return the results or an error message."""
        if cache := self._check_cache(tool_input):
            logger.debug("Found cached result for %s: %s", tool_input, cache)
            return cache
        try:
            logger.info("Running PBI Query Tool with input: %s", tool_input)
            query = self.llm_chain.predict(
                tool_input=tool_input,
                tables=self.powerbi.get_table_names(),
                schemas=self.powerbi.get_schemas(),
                examples=self.examples,
            )
        except Exception as exc:  # pylint: disable=broad-except
            self.session_cache[tool_input] = f"Error on call to LLM: {exc}"
            return self.session_cache[tool_input]
        if query == "I cannot answer this":
            self.session_cache[tool_input] = query
            return self.session_cache[tool_input]
        logger.info("PBI Query:\n%s", query)
        start_time = perf_counter()
        pbi_result = self.powerbi.run(command=query)
        end_time = perf_counter()
        logger.debug("PBI Result: %s", pbi_result)
        logger.debug(f"PBI Query duration: {end_time - start_time:0.6f}")
        result, error = self._parse_output(pbi_result)
        if error is not None and "TokenExpired" in error:
            self.session_cache[
                tool_input
            ] = "Authentication token expired or invalid, please try reauthenticate."
            return self.session_cache[tool_input]

        iterations = kwargs.get("iterations", 0)
        if error and iterations < self.max_iterations:
            return self._run(
                tool_input=RETRY_RESPONSE.format(
                    tool_input=tool_input, query=query, error=error
                ),
                run_manager=run_manager,
                iterations=iterations + 1,
            )

        self.session_cache[tool_input] = (
            result if result else BAD_REQUEST_RESPONSE.format(error=error)
        )
        return self.session_cache[tool_input]

    async def _arun(
        self,
        tool_input: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> str:
        """Execute the query, return the results or an error message."""
        if cache := self._check_cache(tool_input):
            logger.debug("Found cached result for %s: %s", tool_input, cache)
            return f"{cache}, from cache, you have already asked this question."
        try:
            logger.info("Running PBI Query Tool with input: %s", tool_input)
            query = await self.llm_chain.apredict(
                tool_input=tool_input,
                tables=self.powerbi.get_table_names(),
                schemas=self.powerbi.get_schemas(),
                examples=self.examples,
            )
        except Exception as exc:  # pylint: disable=broad-except
            self.session_cache[tool_input] = f"Error on call to LLM: {exc}"
            return self.session_cache[tool_input]
        if query == "I cannot answer this":
            self.session_cache[tool_input] = query
            return self.session_cache[tool_input]
        logger.info("PBI Query: %s", query)
        start_time = perf_counter()
        pbi_result = await self.powerbi.arun(command=query)
        end_time = perf_counter()
        logger.debug("PBI Result: %s", pbi_result)
        logger.debug(f"PBI Query duration: {end_time - start_time:0.6f}")
        result, error = self._parse_output(pbi_result)
        if error is not None and ("TokenExpired" in error or "TokenError" in error):
            self.session_cache[
                tool_input
            ] = "Authentication token expired or invalid, please try to reauthenticate or check the scope of the credential."  # noqa: E501
            return self.session_cache[tool_input]

        iterations = kwargs.get("iterations", 0)
        if error and iterations < self.max_iterations:
            return await self._arun(
                tool_input=RETRY_RESPONSE.format(
                    tool_input=tool_input, query=query, error=error
                ),
                run_manager=run_manager,
                iterations=iterations + 1,
            )

        self.session_cache[tool_input] = (
            result if result else BAD_REQUEST_RESPONSE.format(error=error)
        )
        return self.session_cache[tool_input]

    def _parse_output(
        self, pbi_result: Dict[str, Any]
    ) -> Tuple[Optional[str], Optional[Any]]:
        """Parse the output of the query to a markdown table."""
        if "results" in pbi_result:
            rows = pbi_result["results"][0]["tables"][0]["rows"]
            if len(rows) == 0:
                logger.info("0 records in result, query was valid.")
                return (
                    None,
                    "0 rows returned, this might be correct, but please validate if all filter values were correct?",  # noqa: E501
                )
            result = json_to_md(rows)
            too_long, length = self._result_too_large(result)
            if too_long:
                return (
                    f"Result too large, please try to be more specific or use the `TOPN` function. The result is {length} tokens long, the limit is {self.output_token_limit} tokens.",  # noqa: E501
                    None,
                )
            return result, None

        if "error" in pbi_result:
            if (
                "pbi.error" in pbi_result["error"]
                and "details" in pbi_result["error"]["pbi.error"]
            ):
                return None, pbi_result["error"]["pbi.error"]["details"][0]["detail"]
            return None, pbi_result["error"]
        return None, pbi_result

    def _result_too_large(self, result: str) -> Tuple[bool, int]:
        """Tokenize the output of the query."""
        if self.tiktoken_model_name:
            tiktoken_ = _import_tiktoken()
            encoding = tiktoken_.encoding_for_model(self.tiktoken_model_name)
            length = len(encoding.encode(result))
            logger.info("Result length: %s", length)
            return length > self.output_token_limit, length
        return False, 0
langchain.tools.powerbi.tool.QueryPowerBITool¶ class langchain.tools.powerbi.tool.QueryPowerBITool(*, name: str = 'query_powerbi', description: str = '\n    Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n\n    Example Input: "How many rows are in table1?"\n    ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, llm_chain: LLMChain, powerbi: PowerBIDataset, examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n', session_cache: Dict[str, Any] = None, max_iterations: int = 5, output_token_limit: int = 4000, tiktoken_model_name: Optional[str] = None)[source]¶ Bases: BaseTool Tool for querying a Power BI Dataset. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = '\n    Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n\n    Example Input: "How many rows are in table1?"\n    '¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n'¶ param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param llm_chain: langchain.chains.llm.LLMChain [Required]¶ param max_iterations: int = 5¶ param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'query_powerbi'¶ The unique name of the tool that clearly communicates its purpose. 
param output_token_limit: int = 4000¶ param powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param session_cache: Dict[str, Any] [Optional]¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param tiktoken_model_name: Optional[str] = None¶ param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. validator validate_llm_chain_input_variables  »  llm_chain[source]¶ Make sure the LLM chain has the correct input variables. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
Tool for querying a Power BI Dataset.
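A hypothetical construction sketch: the dataset GUID, table name, and token are placeholders, an OPENAI_API_KEY is assumed for the LLM, and the PowerBIDataset field names reflect my reading of langchain.utilities.powerbi rather than anything stated in this record. Note that the validator above requires the chain's prompt to use only the input variables tool_input, tables, schemas, and examples.

from langchain.chains.llm import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.tools.powerbi.tool import QueryPowerBITool
from langchain.utilities.powerbi import PowerBIDataset

prompt = PromptTemplate(
    input_variables=["tool_input", "tables", "schemas", "examples"],
    template=(
        "Tables: {tables}\nSchemas: {schemas}\n"
        "Examples: {examples}\nQuestion: {tool_input}\nDAX:"
    ),
)
dataset = PowerBIDataset(
    dataset_id="<dataset-guid>",  # placeholder
    table_names=["table1"],  # placeholder
    token="<azure-ad-token>",  # placeholder credential (assumed field name)
)
tool = QueryPowerBITool(
    llm_chain=LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt),
    powerbi=dataset,
)
print(tool.run("How many rows are in table1?"))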
710a059e-7f30-4b6d-8b24-4ff09a5e2bf9
[ "logging", "time.perf_counter", "typing.Any", "typing.Dict", "typing.Optional", "typing.Tuple", "pydantic.Field", "pydantic.validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.chains.llm.LLMChain", "langchain.chat_models.openai._import_tiktoken", "langchain.tools.base.BaseTool", "langchain.tools.powerbi.prompt.BAD_REQUEST_RESPONSE", "langchain.tools.powerbi.prompt.DEFAULT_FEWSHOT_EXAMPLES", "langchain.tools.powerbi.prompt.RETRY_RESPONSE", "langchain.utilities.powerbi.PowerBIDataset", "langchain.utilities.powerbi.json_to_md" ]
langchain.tools.powerbi.tool.InfoPowerBITool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.InfoPowerBITool.html#langchain.tools.powerbi.tool.InfoPowerBITool
class InfoPowerBITool(BaseTool):
    """Tool for getting metadata about a PowerBI Dataset."""

    name = "schema_powerbi"
    description = """
    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.
    Be sure that the tables actually exist by calling list_tables_powerbi first!

    Example Input: "table1, table2, table3"
    """  # noqa: E501
    powerbi: PowerBIDataset = Field(exclude=True)

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def _run(
        self,
        tool_input: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Get the schema for tables in a comma-separated list."""
        return self.powerbi.get_table_info(tool_input.split(", "))

    async def _arun(
        self,
        tool_input: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        return await self.powerbi.aget_table_info(tool_input.split(", "))
langchain.tools.powerbi.tool.InfoPowerBITool¶ class langchain.tools.powerbi.tool.InfoPowerBITool(*, name: str = 'schema_powerbi', description: str = '\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_powerbi first!\n\n    Example Input: "table1, table2, table3"\n    ', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, powerbi: PowerBIDataset)[source]¶ Bases: BaseTool Tool for getting metadata about a PowerBI Dataset. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = '\n    Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list_tables_powerbi first!\n\n    Example Input: "table1, table2, table3"\n    '¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'schema_powerbi'¶ The unique name of the tool that clearly communicates its purpose. param powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. 
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
Tool for getting metadata about a PowerBI Dataset.
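A minimal sketch reusing the (placeholder) PowerBIDataset from the QueryPowerBITool example above; per the description, the input is a comma-separated list of table names.

from langchain.tools.powerbi.tool import InfoPowerBITool

info_tool = InfoPowerBITool(powerbi=dataset)  # `dataset` as constructed above
print(info_tool.run("table1, table2"))  # schema and sample rows for those tables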
57dce085-11a5-401a-b714-b14a12bbcdec
[ "logging", "time.perf_counter", "typing.Any", "typing.Dict", "typing.Optional", "typing.Tuple", "pydantic.Field", "pydantic.validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.chains.llm.LLMChain", "langchain.chat_models.openai._import_tiktoken", "langchain.tools.base.BaseTool", "langchain.tools.powerbi.prompt.BAD_REQUEST_RESPONSE", "langchain.tools.powerbi.prompt.DEFAULT_FEWSHOT_EXAMPLES", "langchain.tools.powerbi.prompt.RETRY_RESPONSE", "langchain.utilities.powerbi.PowerBIDataset", "langchain.utilities.powerbi.json_to_md" ]
langchain.tools.powerbi.tool.ListPowerBITool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.powerbi.tool.ListPowerBITool.html#langchain.tools.powerbi.tool.ListPowerBITool
class ListPowerBITool(BaseTool):
    """Tool for getting tables names."""

    name = "list_tables_powerbi"
    description = "Input is an empty string, output is a comma separated list of tables in the database."  # noqa: E501 # pylint: disable=C0301
    powerbi: PowerBIDataset = Field(exclude=True)

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def _run(
        self,
        tool_input: Optional[str] = None,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Get the names of the tables."""
        return ", ".join(self.powerbi.get_table_names())

    async def _arun(
        self,
        tool_input: Optional[str] = None,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Get the names of the tables."""
        return ", ".join(self.powerbi.get_table_names())
langchain.tools.powerbi.tool.ListPowerBITool¶ class langchain.tools.powerbi.tool.ListPowerBITool(*, name: str = 'list_tables_powerbi', description: str = 'Input is an empty string, output is a comma separated list of tables in the database.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, powerbi: PowerBIDataset)[source]¶ Bases: BaseTool Tool for getting tables names. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Input is an empty string, output is a comma separated list of tables in the database.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'list_tables_powerbi'¶ The unique name of the tool that clearly communicates its purpose. param powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]¶ param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. 
property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
Tool for getting tables names.
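A minimal sketch with the same assumed `dataset`: the input string is unused, and the output is the dataset's table names joined with commas.

from langchain.tools.powerbi.tool import ListPowerBITool

list_tool = ListPowerBITool(powerbi=dataset)  # `dataset` as constructed above
print(list_tool.run(""))  # e.g. "table1, table2"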
5d28a31a-9f84-4918-91c0-644e25c92215
[ "__future__.annotations", "json", "typing.TYPE_CHECKING", "typing.Any", "typing.Optional", "typing.Type", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.playwright.base.BaseBrowserTool", "langchain.tools.playwright.utils.aget_current_page", "langchain.tools.playwright.utils.get_current_page" ]
langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput.html#langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput
class ExtractHyperlinksToolInput(BaseModel):
    """Input for ExtractHyperlinksTool."""

    absolute_urls: bool = Field(
        default=False,
        description="Return absolute URLs instead of relative URLs",
    )
langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput¶ class langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput(*, absolute_urls: bool = False)[source]¶ Bases: BaseModel Input for ExtractHyperlinksTool. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param absolute_urls: bool = False¶ Return absolute URLs instead of relative URLs
Input for ExtractHyperlinksTool.
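A small illustrative sketch (not from the record): as a pydantic model, the schema parses and validates the tool's keyword input before _run is invoked.

from langchain.tools.playwright.extract_hyperlinks import ExtractHyperlinksToolInput

parsed = ExtractHyperlinksToolInput(absolute_urls=True)
print(parsed.dict())  # {'absolute_urls': True}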
59d0ca56-df6c-4f65-95fd-e6c2031efcac
[ "__future__.annotations", "json", "typing.TYPE_CHECKING", "typing.Any", "typing.Optional", "typing.Type", "pydantic.BaseModel", "pydantic.Field", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.playwright.base.BaseBrowserTool", "langchain.tools.playwright.utils.aget_current_page", "langchain.tools.playwright.utils.get_current_page" ]
langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool.html#langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool
class ExtractHyperlinksTool(BaseBrowserTool):
    """Extract all hyperlinks on the page."""

    name: str = "extract_hyperlinks"
    description: str = "Extract all hyperlinks on the current webpage"
    args_schema: Type[BaseModel] = ExtractHyperlinksToolInput

    @root_validator
    def check_bs_import(cls, values: dict) -> dict:
        """Check that the arguments are valid."""
        try:
            from bs4 import BeautifulSoup  # noqa: F401
        except ImportError:
            raise ValueError(
                "The 'beautifulsoup4' package is required to use this tool."
                " Please install it with 'pip install beautifulsoup4'."
            )
        return values

    @staticmethod
    def scrape_page(page: Any, html_content: str, absolute_urls: bool) -> str:
        from urllib.parse import urljoin

        from bs4 import BeautifulSoup

        # Parse the HTML content with BeautifulSoup
        soup = BeautifulSoup(html_content, "lxml")

        # Find all the anchor elements and extract their href attributes
        anchors = soup.find_all("a")
        if absolute_urls:
            base_url = page.url
            links = [urljoin(base_url, anchor.get("href", "")) for anchor in anchors]
        else:
            links = [anchor.get("href", "") for anchor in anchors]

        # Return the list of links as a JSON string
        return json.dumps(links)

    def _run(
        self,
        absolute_urls: bool = False,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        if self.sync_browser is None:
            raise ValueError(f"Synchronous browser not provided to {self.name}")
        page = get_current_page(self.sync_browser)
        html_content = page.content()
        return self.scrape_page(page, html_content, absolute_urls)

    async def _arun(
        self,
        absolute_urls: bool = False,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        if self.async_browser is None:
            raise ValueError(f"Asynchronous browser not provided to {self.name}")
        page = await aget_current_page(self.async_browser)
        html_content = await page.content()
        return self.scrape_page(page, html_content, absolute_urls)
langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool¶ class langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksTool(*, name: str = 'extract_hyperlinks', description: str = 'Extract all hyperlinks on the current webpage', args_schema: ~typing.Type[~pydantic.main.BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]¶ Bases: BaseBrowserTool Extract all hyperlinks on the page. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>¶ Pydantic model class to validate and parse the tool’s input arguments. param async_browser: Optional['AsyncBrowser'] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Extract all hyperlinks on the current webpage'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'extract_hyperlinks'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param sync_browser: Optional['SyncBrowser'] = None¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. 
validator check_bs_import  »  all fields[source]¶ Check that the arguments are valid. classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → BaseBrowserTool¶ Instantiate the tool. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. static scrape_page(page: Any, html_content: str, absolute_urls: bool) → str[source]¶ validator validate_browser_provided  »  all fields¶ Check that the arguments are valid. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Extract all hyperlinks on the page.
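A minimal end-to-end sketch, assuming the playwright, beautifulsoup4, and lxml packages are installed, and assuming NavigateTool and create_sync_playwright_browser are available from the same langchain.tools.playwright package (neither is part of this record). example.com is a placeholder URL.

from langchain.tools.playwright.extract_hyperlinks import ExtractHyperlinksTool
from langchain.tools.playwright.navigate import NavigateTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()
NavigateTool.from_browser(sync_browser=browser).run({"url": "https://example.com"})
links_tool = ExtractHyperlinksTool.from_browser(sync_browser=browser)
# Dict input is parsed by ExtractHyperlinksToolInput; absolute_urls=True resolves
# relative hrefs against the current page URL.
print(links_tool.run({"absolute_urls": True}))  # JSON-encoded list of links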
33d405b2-7406-4c3a-8b70-d2d7020f88d8
[ "__future__.annotations", "typing.Optional", "typing.Type", "pydantic.BaseModel", "pydantic.root_validator", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.playwright.base.BaseBrowserTool", "langchain.tools.playwright.utils.aget_current_page", "langchain.tools.playwright.utils.get_current_page" ]
langchain.tools.playwright.extract_text.ExtractTextTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.extract_text.ExtractTextTool.html#langchain.tools.playwright.extract_text.ExtractTextTool
class ExtractTextTool(BaseBrowserTool):
    name: str = "extract_text"
    description: str = "Extract all the text on the current webpage"
    args_schema: Type[BaseModel] = BaseModel

    @root_validator
    def check_acheck_bs_importrgs(cls, values: dict) -> dict:
        """Check that the arguments are valid."""
        try:
            from bs4 import BeautifulSoup  # noqa: F401
        except ImportError:
            raise ValueError(
                "The 'beautifulsoup4' package is required to use this tool."
                " Please install it with 'pip install beautifulsoup4'."
            )
        return values

    def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        """Use the tool."""
        # Use Beautiful Soup since it's faster than looping through the elements
        from bs4 import BeautifulSoup

        if self.sync_browser is None:
            raise ValueError(f"Synchronous browser not provided to {self.name}")

        page = get_current_page(self.sync_browser)
        html_content = page.content()

        # Parse the HTML content with BeautifulSoup
        soup = BeautifulSoup(html_content, "lxml")

        return " ".join(text for text in soup.stripped_strings)

    async def _arun(
        self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        if self.async_browser is None:
            raise ValueError(f"Asynchronous browser not provided to {self.name}")
        # Use Beautiful Soup since it's faster than looping through the elements
        from bs4 import BeautifulSoup

        page = await aget_current_page(self.async_browser)
        html_content = await page.content()

        # Parse the HTML content with BeautifulSoup
        soup = BeautifulSoup(html_content, "lxml")

        return " ".join(text for text in soup.stripped_strings)
langchain.tools.playwright.extract_text.ExtractTextTool¶ class langchain.tools.playwright.extract_text.ExtractTextTool(*, name: str = 'extract_text', description: str = 'Extract all the text on the current webpage', args_schema: ~typing.Type[~pydantic.main.BaseModel] = <class 'pydantic.main.BaseModel'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]¶ Bases: BaseBrowserTool Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>¶ Pydantic model class to validate and parse the tool’s input arguments. param async_browser: Optional['AsyncBrowser'] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Extract all the text on the current webpage'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'extract_text'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param sync_browser: Optional['SyncBrowser'] = None¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator check_acheck_bs_importrgs  »  all fields[source]¶ Check that the arguments are valid. 
classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → BaseBrowserTool¶ Instantiate the tool. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. validator validate_browser_provided  »  all fields¶ Check that the arguments are valid. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Create a new model by parsing and validating input data from keyword arguments.
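A minimal usage sketch for ExtractTextTool (illustrative only, not part of the generated reference; it assumes the playwright and beautifulsoup4 packages are installed and that Playwright's Chromium build has been downloaded; NavigateTool is the sibling tool used here only to load a page first):

from langchain.tools.playwright.extract_text import ExtractTextTool
from langchain.tools.playwright.navigate import NavigateTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

# Launch a headless Chromium instance; tools built from the same
# browser object share its current page.
browser = create_sync_playwright_browser()
navigate = NavigateTool.from_browser(sync_browser=browser)
extract = ExtractTextTool.from_browser(sync_browser=browser)

navigate.run({"url": "https://python.langchain.com"})
# args_schema is an empty BaseModel, so the tool takes no arguments.
text = extract.run({})
print(text[:200])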
d267e6df-3144-4d2a-9e33-ed8f20b06aa1
[ "__future__.annotations", "typing.Optional", "typing.Type", "pydantic.BaseModel", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.playwright.base.BaseBrowserTool", "langchain.tools.playwright.utils.aget_current_page", "langchain.tools.playwright.utils.get_current_page" ]
langchain.tools.playwright.current_page.CurrentWebPageTool
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.current_page.CurrentWebPageTool.html#langchain.tools.playwright.current_page.CurrentWebPageTool
class CurrentWebPageTool(BaseBrowserTool):
    name: str = "current_webpage"
    description: str = "Returns the URL of the current page"
    args_schema: Type[BaseModel] = BaseModel

    def _run(
        self,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        if self.sync_browser is None:
            raise ValueError(f"Synchronous browser not provided to {self.name}")
        page = get_current_page(self.sync_browser)
        return str(page.url)

    async def _arun(
        self,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        if self.async_browser is None:
            raise ValueError(f"Asynchronous browser not provided to {self.name}")
        page = await aget_current_page(self.async_browser)
        return str(page.url)
langchain.tools.playwright.current_page.CurrentWebPageTool¶ class langchain.tools.playwright.current_page.CurrentWebPageTool(*, name: str = 'current_webpage', description: str = 'Returns the URL of the current page', args_schema: ~typing.Type[~pydantic.main.BaseModel] = <class 'pydantic.main.BaseModel'>, return_direct: bool = False, verbose: bool = False, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, handle_tool_error: ~typing.Optional[~typing.Union[bool, str, ~typing.Callable[[~langchain.tools.base.ToolException], str]]] = False, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]¶ Bases: BaseBrowserTool Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>¶ Pydantic model class to validate and parse the tool’s input arguments. param async_browser: Optional['AsyncBrowser'] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Returns the URL of the current page'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None. This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a tool with its use case. param name: str = 'current_webpage'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param sync_browser: Optional['SyncBrowser'] = None¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None. These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → BaseBrowserTool¶ Instantiate the tool.
validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. validator validate_browser_provided  »  all fields¶ Check that the arguments are valid. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
Create a new model by parsing and validating input data from keyword arguments.
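A similar usage sketch for CurrentWebPageTool (again illustrative, assuming a working Playwright install with its Chromium build downloaded):

from langchain.tools.playwright.current_page import CurrentWebPageTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

browser = create_sync_playwright_browser()
tool = CurrentWebPageTool.from_browser(sync_browser=browser)

# _run() reads page.url from the browser's current page; a freshly
# launched browser reports "about:blank".
print(tool.run({}))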
3d8a7db0-de2f-41fc-8ced-b97fc2daa5d4
[ "__future__.annotations", "typing.Optional", "typing.Type", "pydantic.BaseModel", "pydantic.Field", "langchain.callbacks.manager.AsyncCallbackManagerForToolRun", "langchain.callbacks.manager.CallbackManagerForToolRun", "langchain.tools.playwright.base.BaseBrowserTool", "langchain.tools.playwright.utils.aget_current_page", "langchain.tools.playwright.utils.get_current_page" ]
langchain.tools.playwright.navigate.NavigateToolInput
Class
https://api.python.langchain.com/en/latest/tools/langchain.tools.playwright.navigate.NavigateToolInput.html#langchain.tools.playwright.navigate.NavigateToolInput
class NavigateToolInput(BaseModel):
    """Input for NavigateTool."""

    url: str = Field(..., description="url to navigate to")
langchain.tools.playwright.navigate.NavigateToolInput¶ class langchain.tools.playwright.navigate.NavigateToolInput(*, url: str)[source]¶ Bases: BaseModel Input for NavigateTool. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param url: str [Required]¶ url to navigate to
Input for NavigateTool.
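A short sketch of the validation behaviour NavigateToolInput provides (illustrative; the failing call is included only to show that url is required):

from pydantic import ValidationError

from langchain.tools.playwright.navigate import NavigateToolInput

args = NavigateToolInput(url="https://python.langchain.com")
print(args.url)  # -> https://python.langchain.com

try:
    NavigateToolInput()  # missing the required `url` field
except ValidationError as err:
    print(err)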