id
stringlengths
14
15
text
stringlengths
13
2.7k
source
stringlengths
60
181
526bd25b2d7c-0
langchain_core.prompt_values.ChatPromptValue¶ class langchain_core.prompt_values.ChatPromptValue[source]¶ Bases: PromptValue Chat prompt value. A type of a prompt value that is built from messages. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param messages: Sequence[langchain_core.messages.base.BaseMessage] [Required]¶ List of messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.ChatPromptValue.html
526bd25b2d7c-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.ChatPromptValue.html
526bd25b2d7c-2
The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ to_messages() → List[BaseMessage][source]¶ Return prompt as a list of messages. to_string() → str[source]¶ Return prompt as string. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.ChatPromptValue.html
625fcd75bab5-0
langchain_core.prompt_values.PromptValue¶ class langchain_core.prompt_values.PromptValue[source]¶ Bases: Serializable, ABC Base abstract class for inputs to any language model. PromptValues can be converted to both LLM (pure text-generation) inputs andChatModel inputs. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.PromptValue.html
625fcd75bab5-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool[source]¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.PromptValue.html
625fcd75bab5-2
The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ abstract to_messages() → List[BaseMessage][source]¶ Return prompt as a list of Messages. abstract to_string() → str[source]¶ Return prompt value as string. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.PromptValue.html
62aff3683a68-0
langchain_core.outputs.chat_result.ChatResult¶ class langchain_core.outputs.chat_result.ChatResult[source]¶ Bases: BaseModel Class that contains all results for a single chat model call. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generations: List[langchain_core.outputs.chat_generation.ChatGeneration] [Required]¶ List of the chat generations. This is a List because an input can have multiple candidate generations. param llm_output: Optional[dict] = None¶ For arbitrary LLM provider specific output. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_result.ChatResult.html
62aff3683a68-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_result.ChatResult.html
62aff3683a68-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_result.ChatResult.html
15bf51547055-0
langchain_core.outputs.chat_generation.ChatGeneration¶ class langchain_core.outputs.chat_generation.ChatGeneration[source]¶ Bases: Generation A single chat generation output. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generation_info: Optional[Dict[str, Any]] = None¶ Raw response from the provider. May include things like the reason for finishing or token log probabilities. param message: BaseMessage [Required]¶ The message output by the chat model. param text: str = ''¶ SHOULD NOT BE SET DIRECTLY The text contents of the output message. param type: Literal['ChatGeneration'] = 'ChatGeneration'¶ Type is used exclusively for serialization purposes. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGeneration.html
15bf51547055-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGeneration.html
15bf51547055-2
The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGeneration.html
d29a73310fb9-0
langchain_core.outputs.generation.GenerationChunk¶ class langchain_core.outputs.generation.GenerationChunk[source]¶ Bases: Generation A Generation chunk, which can be concatenated with other Generation chunks. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generation_info: Optional[Dict[str, Any]] = None¶ Raw response from the provider. May include things like the reason for finishing or token log probabilities. param text: str [Required]¶ Generated text output. param type: Literal['Generation'] = 'Generation'¶ Type is used exclusively for serialization purposes. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.GenerationChunk.html
d29a73310fb9-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.GenerationChunk.html
d29a73310fb9-2
The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.GenerationChunk.html
f7e54f2a08cb-0
langchain_core.outputs.generation.Generation¶ class langchain_core.outputs.generation.Generation[source]¶ Bases: Serializable A single text generation output. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generation_info: Optional[Dict[str, Any]] = None¶ Raw response from the provider. May include things like the reason for finishing or token log probabilities. param text: str [Required]¶ Generated text output. param type: Literal['Generation'] = 'Generation'¶ Type is used exclusively for serialization purposes. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.Generation.html
f7e54f2a08cb-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool[source]¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.Generation.html
f7e54f2a08cb-2
The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.generation.Generation.html
1f921fec5adf-0
langchain_core.outputs.run_info.RunInfo¶ class langchain_core.outputs.run_info.RunInfo[source]¶ Bases: BaseModel Class that contains metadata for a single execution of a Chain or model. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param run_id: uuid.UUID [Required]¶ A unique identifier for the model or chain run. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.run_info.RunInfo.html
1f921fec5adf-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.run_info.RunInfo.html
1f921fec5adf-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.run_info.RunInfo.html
a4d7e785531a-0
langchain_core.outputs.llm_result.LLMResult¶ class langchain_core.outputs.llm_result.LLMResult[source]¶ Bases: BaseModel Class that contains all results for a batched LLM call. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generations: List[List[langchain_core.outputs.generation.Generation]] [Required]¶ List of generated outputs. This is a List[List[]] because each input could have multiple candidate generations. param llm_output: Optional[dict] = None¶ Arbitrary LLM provider-specific output. param run: Optional[List[langchain_core.outputs.run_info.RunInfo]] = None¶ List of metadata info for model call for each input. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html
a4d7e785531a-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. flatten() → List[LLMResult][source]¶ Flatten generations into a single list. Unpack List[List[Generation]] -> List[LLMResult] where each returned LLMResultcontains only a single Generation. If token usage information is available, it is kept only for the LLMResult corresponding to the top-choice Generation, to avoid over-counting of token usage downstream. Returns List of LLMResults where each returned LLMResult contains a singleGeneration. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html
a4d7e785531a-2
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using LLMResult¶ Ollama Async callbacks
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.llm_result.LLMResult.html
bb4b1e446c2f-0
langchain_core.outputs.chat_generation.ChatGenerationChunk¶ class langchain_core.outputs.chat_generation.ChatGenerationChunk[source]¶ Bases: ChatGeneration A ChatGeneration chunk, which can be concatenated with otherChatGeneration chunks. message¶ The message chunk output by the chat model. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param generation_info: Optional[Dict[str, Any]] = None¶ Raw response from the provider. May include things like the reason for finishing or token log probabilities. param message: BaseMessageChunk [Required]¶ The message output by the chat model. param text: str = ''¶ SHOULD NOT BE SET DIRECTLY The text contents of the output message. param type: Literal['ChatGenerationChunk'] = 'ChatGenerationChunk'¶ Type is used exclusively for serialization purposes. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGenerationChunk.html
bb4b1e446c2f-1
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes.
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGenerationChunk.html
bb4b1e446c2f-2
A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/outputs/langchain_core.outputs.chat_generation.ChatGenerationChunk.html
cf8811248d69-0
langchain_experimental.prompt_injection_identifier.hugging_face_identifier.PromptInjectionException¶ class langchain_experimental.prompt_injection_identifier.hugging_face_identifier.PromptInjectionException(message: str = 'Prompt injection attack detected', score: float = 1.0)[source]¶
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.PromptInjectionException.html
69437dca9ecb-0
langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier¶ class langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier[source]¶ Bases: BaseTool Tool that uses HF model to detect prompt injection attacks. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'A wrapper around HuggingFace Prompt Injection security model. Useful for when you need to ensure that prompt is free of injection attacks. Input should be any message from the user.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param injection_label: str = 'INJECTION'¶ Label for prompt injection detection model. Defaults to INJECTION. Value depends on the model used. Label of the injection for prompt injection detection. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param model: Union[Pipeline, str, None] [Optional]¶
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-1
param model: Union[Pipeline, str, None] [Optional]¶ Model to use for prompt injection detection. Can be specified as transformers Pipeline or string. String should correspond to themodel name of a text-classification transformers model. Defaults to laiyer/deberta-v3-base-prompt-injection model. param name: str = 'hugging_face_injection_identifier'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param threshold: float = 0.5¶ Threshold for prompt injection detection. Defaults to 0.5. Threshold for prompt injection detection. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-2
e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-3
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state. with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-4
bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-5
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ The tool’s input schema. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-6
Returns A pydantic model that can be used to validate output. invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) → Any¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-7
by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) → Any¶ Run the tool. classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-8
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-9
added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property args: dict¶ property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property is_single_input: bool¶ Whether the tool only accepts a single input. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
69437dca9ecb-10
A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
https://api.python.langchain.com/en/latest/prompt_injection_identifier/langchain_experimental.prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier.html
b24cb3beb24e-0
langchain.hub.pull¶ langchain.hub.pull(owner_repo_commit: str, *, api_url: Optional[str] = None, api_key: Optional[str] = None) → Any[source]¶ Pulls an object from the hub and returns it as a LangChain object. Parameters owner_repo_commit – The full name of the repo to pull from in the format of owner/repo:commit_hash. api_url – The URL of the LangChain Hub API. Defaults to the hosted API service if you have an api key set, or a localhost instance if not. api_key – The API key to use to authenticate with the LangChain Hub API.
https://api.python.langchain.com/en/latest/hub/langchain.hub.pull.html
f13d83bfc576-0
langchain.hub.push¶ langchain.hub.push(repo_full_name: str, object: Any, *, api_url: Optional[str] = None, api_key: Optional[str] = None, parent_commit_hash: Optional[str] = 'latest', new_repo_is_public: bool = True, new_repo_description: str = '') → str[source]¶ Pushes an object to the hub and returns the URL it can be viewed at in a browser. Parameters repo_full_name – The full name of the repo to push to in the format of owner/repo. object – The LangChain to serialize and push to the hub. api_url – The URL of the LangChain Hub API. Defaults to the hosted API service if you have an api key set, or a localhost instance if not. api_key – The API key to use to authenticate with the LangChain Hub API. parent_commit_hash – The commit hash of the parent commit to push to. Defaults to the latest commit automatically. new_repo_is_public – Whether the repo should be public. Defaults to True (Public by default). new_repo_description – The description of the repo. Defaults to an empty string.
https://api.python.langchain.com/en/latest/hub/langchain.hub.push.html
b34b561bf3d8-0
langchain_experimental.generative_agents.generative_agent.GenerativeAgent¶ class langchain_experimental.generative_agents.generative_agent.GenerativeAgent[source]¶ Bases: BaseModel An Agent as a character with memory and innate characteristics. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param age: Optional[int] = None¶ The optional age of the character. param daily_summaries: List[str] [Optional]¶ Summary of the events in the plan that the agent took. param last_refreshed: datetime.datetime [Optional]¶ The last time the character’s summary was regenerated. param llm: langchain_core.language_models.base.BaseLanguageModel [Required]¶ The underlying language model. param memory: langchain_experimental.generative_agents.memory.GenerativeAgentMemory [Required]¶ The memory object that combines relevance, recency, and ‘importance’. param name: str [Required]¶ The character’s name. param status: str [Required]¶ The traits of the character you wish not to change. param summary: str = ''¶ Stateful self-summary generated via reflection on the character’s memory. param summary_refresh_seconds: int = 3600¶ How frequently to re-generate the summary. param traits: str = 'N/A'¶ Permanent traits to ascribe to the character. param verbose: bool = False¶ chain(prompt: PromptTemplate) → LLMChain[source]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
b34b561bf3d8-1
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ generate_dialogue_response(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]¶ React to a given observation. generate_reaction(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]¶ React to a given observation. get_full_header(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]¶
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
b34b561bf3d8-2
Return a full header of the agent’s status, summary, and current time. get_summary(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]¶ Return a descriptive summary of the agent. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ summarize_related_memories(observation: str) → str[source]¶ Summarize memories that are most relevant to an observation. classmethod update_forward_refs(**localns: Any) → None¶
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
b34b561bf3d8-3
classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.generative_agent.GenerativeAgent.html
0c5975d29525-0
langchain_experimental.generative_agents.memory.GenerativeAgentMemory¶ class langchain_experimental.generative_agents.memory.GenerativeAgentMemory[source]¶ Bases: BaseMemory Memory for the generative agent. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param add_memory_key: str = 'add_memory'¶ param aggregate_importance: float = 0.0¶ Track the sum of the ‘importance’ of recent memories. Triggers reflection when it reaches reflection_threshold. param current_plan: List[str] = []¶ The current plan of the agent. param importance_weight: float = 0.15¶ How much weight to assign the memory importance. param llm: langchain_core.language_models.base.BaseLanguageModel [Required]¶ The core language model. param max_tokens_limit: int = 1200¶ param memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]¶ The retriever to fetch related memories. param most_recent_memories_key: str = 'most_recent_memories'¶ param most_recent_memories_token_key: str = 'recent_memories_token'¶ param now_key: str = 'now'¶ param queries_key: str = 'queries'¶ param reflecting: bool = False¶ param reflection_threshold: Optional[float] = None¶ When aggregate_importance exceeds reflection_threshold, stop to reflect. param relevant_memories_key: str = 'relevant_memories'¶ param relevant_memories_simple_key: str = 'relevant_memories_simple'¶ param verbose: bool = False¶ add_memories(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
0c5975d29525-1
Add an observations or memories to the agent’s memory. add_memory(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶ Add an observation or memory to the agent’s memory. chain(prompt: PromptTemplate) → LLMChain[source]¶ clear() → None[source]¶ Clear memory contents. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
0c5975d29525-2
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. fetch_memories(observation: str, now: Optional[datetime] = None) → List[Document][source]¶ Fetch related memories. format_memories_detail(relevant_memories: List[Document]) → str[source]¶ format_memories_simple(relevant_memories: List[Document]) → str[source]¶ classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶ Return key-value pairs given the text input to the chain.
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
0c5975d29525-3
Return key-value pairs given the text input to the chain. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ pause_to_reflect(now: Optional[datetime] = None) → List[str][source]¶ Reflect on recent observations and generate ‘insights’. save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) → None[source]¶ Save the context of this model run to memory. classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property memory_variables: List[str]¶ Input keys this memory class will load dynamically.
https://api.python.langchain.com/en/latest/generative_agents/langchain_experimental.generative_agents.memory.GenerativeAgentMemory.html
9a4a89266f55-0
langchain_experimental.fallacy_removal.base.FallacyChain¶ class langchain_experimental.fallacy_removal.base.FallacyChain[source]¶ Bases: Chain Chain for applying logical fallacy evaluations, modeled after Constitutional AI and in same format, but applying logical fallacies as generalized rules to remove in output Example from langchain.llms import OpenAI from langchain.chains import LLMChain from langchain_experimental.fallacy import FallacyChain from langchain_experimental.fallacy_removal.models import LogicalFallacy llm = OpenAI() qa_prompt = PromptTemplate( template="Q: {question} A:", input_variables=["question"], ) qa_chain = LLMChain(llm=llm, prompt=qa_prompt) fallacy_chain = FallacyChain.from_llm( llm=llm, chain=qa_chain, logical_fallacies=[ LogicalFallacy( fallacy_critique_request="Tell if this answer meets criteria.", fallacy_revision_request= "Give an answer that meets better criteria.", ) ], ) fallacy_chain.run(question="How do I know if the earth is round?") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-1
Each custom chain can optionally call additional callback methods, see Callback docs for full details. param chain: LLMChain [Required]¶ param fallacy_critique_chain: LLMChain [Required]¶ param fallacy_revision_chain: LLMChain [Required]¶ param logical_fallacies: List[LogicalFallacy] [Required]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param return_intermediate_steps: bool = False¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose().
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-2
accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-3
Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-4
Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-5
directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-6
step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state. with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-7
To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-8
new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...}
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-9
classmethod from_llm(llm: BaseLanguageModel, chain: LLMChain, fallacy_critique_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['fallacy_critique_request', 'input_prompt', 'output_from_model'], examples=[{'input_prompt': "If everyone says the Earth is round,         how do I know that's correct?", 'output_from_model': 'The earth is round because your         teacher says it is', 'fallacy_critique_request': 'Identify specific ways in        which the model’s previous response had a logical fallacy.         Also point out potential logical fallacies in the human’s         questions and responses. Examples of logical fallacies         include but are not limited to ad hominem, ad populum,         appeal to emotion and false causality.', 'fallacy_critique': 'This statement contains the logical         fallacy of Ad Verecundiam or Appeal to Authority. It is         a fallacy because it asserts something to be true purely         based on the authority of the source making the claim,         without any actual evidence to support it.  Fallacy         Critique Needed', 'fallacy_revision': 'The earth is round based on         evidence from observations of its curvature from high         altitudes, photos from space showing its spherical shape,         circumnavigation, and the fact that we see its rounded         shadow on the moon during lunar eclipses.'}, {'input_prompt': 'Should we invest more in our school         music program? After all, studies show students         involved in music perform better academically.', 'output_from_model': "I don't think we should invest         more in the music program. Playing the piccolo won't         teach someone better math skills.", 'fallacy_critique_request': 'Identify specific ways         in which the model’s previous response had a logical         fallacy. Also point out potential logical fallacies         in the human’s questions and responses. Examples of
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-10
Also point out potential logical fallacies         in the human’s questions and responses. Examples of         logical fallacies include but are not limited to ad         homimem, ad populum, appeal to emotion and false         causality.', 'fallacy_critique': 'This answer commits the division         fallacy by rejecting the argument based on assuming         capabilities true of the parts (playing an instrument         like piccolo) also apply to the whole         (the full music program). The answer focuses only on         part of the music program rather than considering it         as a whole.  Fallacy Critique Needed.', 'fallacy_revision': 'While playing an instrument may         teach discipline, more evidence is needed on whether         music education courses improve critical thinking         skills across subjects before determining if increased         investment in the whole music program is warranted.'}], example_prompt=PromptTemplate(input_variables=['fallacy_critique', 'fallacy_critique_request', 'input_prompt', 'output_from_model'], template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}'), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an     AI assistant. If there is no material critique of the     model output, append to the end of the Fallacy Critique:     'No fallacy critique needed.' If there is material     critique     of the model output, append to the end of the Fallacy     Critique: 'Fallacy Critique needed.'"), fallacy_revision_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['fallacy_critique', 'fallacy_critique_request',
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-11
= FewShotPromptTemplate(input_variables=['fallacy_critique', 'fallacy_critique_request', 'fallacy_revision_request', 'input_prompt', 'output_from_model'], examples=[{'input_prompt': "If everyone says the Earth is round,         how do I know that's correct?", 'output_from_model': 'The earth is round because your         teacher says it is', 'fallacy_critique_request': 'Identify specific ways in        which the model’s previous response had a logical fallacy.         Also point out potential logical fallacies in the human’s         questions and responses. Examples of logical fallacies         include but are not limited to ad hominem, ad populum,         appeal to emotion and false causality.', 'fallacy_critique': 'This statement contains the logical         fallacy of Ad Verecundiam or Appeal to Authority. It is         a fallacy because it asserts something to be true purely         based on the authority of the source making the claim,         without any actual evidence to support it.  Fallacy         Critique Needed', 'fallacy_revision_request': 'Please rewrite the model         response to remove all logical fallacies, and to         politely point out any logical fallacies from the         human.', 'fallacy_revision': 'The earth is round based on         evidence from observations of its curvature from high         altitudes, photos from space showing its spherical shape,         circumnavigation, and the fact that we see its rounded         shadow on the moon during lunar eclipses.'}, {'input_prompt': 'Should we invest more in our school         music program? After all, studies show students         involved in music perform better academically.', 'output_from_model': "I don't think we should invest         more in the music program. Playing the piccolo won't         teach someone better math skills.", 'fallacy_critique_request': 'Identify specific ways         in which the model’s previous response had a logical
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-12
'Identify specific ways         in which the model’s previous response had a logical         fallacy. Also point out potential logical fallacies         in the human’s questions and responses. Examples of         logical fallacies include but are not limited to ad         homimem, ad populum, appeal to emotion and false         causality.', 'fallacy_critique': 'This answer commits the division         fallacy by rejecting the argument based on assuming         capabilities true of the parts (playing an instrument         like piccolo) also apply to the whole         (the full music program). The answer focuses only on         part of the music program rather than considering it         as a whole.  Fallacy Critique Needed.', 'fallacy_revision_request': 'Please rewrite the model         response to remove all logical fallacies, and to         politely point out any logical fallacies from the human.', 'fallacy_revision': 'While playing an instrument may         teach discipline, more evidence is needed on whether         music education courses improve critical thinking         skills across subjects before determining if increased         investment in the whole music program is warranted.'}], example_prompt=PromptTemplate(input_variables=['fallacy_critique', 'fallacy_critique_request', 'input_prompt', 'output_from_model'], template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}'), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nFallacy Critique Request: {fallacy_critique_request}\n\nFallacy Critique: {fallacy_critique}\n\nIf the fallacy critique does not identify anything worth changing, ignore the Fallacy Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the fallacy critique does identify something worth changing, please revise the model response based on
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-13
the fallacy critique does identify something worth changing, please revise the model response based on the Fallacy Revision Request.\n\nFallacy Revision Request: {fallacy_revision_request}\n\nFallacy Revision:', example_separator='\n === \n', prefix='Below is a conversation between a human and     an AI assistant.'), **kwargs: Any) → FallacyChain[source]¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-14
Create a chain from an LLM. classmethod from_orm(obj: Any) → Model¶ classmethod get_fallacies(names: Optional[List[str]] = None) → List[LogicalFallacy][source]¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-15
Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-16
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-17
sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-18
Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-19
on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_keys: List[str]¶ Input keys. property input_schema: Type[pydantic.main.BaseModel]¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
9a4a89266f55-20
Input keys. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_keys: List[str]¶ Output keys. property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.base.FallacyChain.html
6d63c4970639-0
langchain_experimental.fallacy_removal.models.LogicalFallacy¶ class langchain_experimental.fallacy_removal.models.LogicalFallacy[source]¶ Bases: BaseModel Class for a logical fallacy. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param fallacy_critique_request: str [Required]¶ param fallacy_revision_request: str [Required]¶ param name: str = 'Logical Fallacy'¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html
6d63c4970639-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html
6d63c4970639-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/fallacy_removal/langchain_experimental.fallacy_removal.models.LogicalFallacy.html
bb9dc6156c6a-0
langchain_experimental.pal_chain.base.PALValidation¶ class langchain_experimental.pal_chain.base.PALValidation(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶ Initialize a PALValidation instance. Parameters solution_expression_name (str) – Name of the expected solution expression. If passed, solution_expression_type must be passed as well. solution_expression_type (str) – AST type of the expected solution expression. If passed, solution_expression_name must be passed as well. Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION, PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE. allow_imports (bool) – Allow import statements. allow_command_exec (bool) – Allow using known command execution functions. Methods __init__([solution_expression_name, ...]) Initialize a PALValidation instance. __init__(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶ Initialize a PALValidation instance. Parameters solution_expression_name (str) – Name of the expected solution expression. If passed, solution_expression_type must be passed as well. solution_expression_type (str) – AST type of the expected solution expression. If passed, solution_expression_name must be passed as well. Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION, PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE. allow_imports (bool) – Allow import statements. allow_command_exec (bool) – Allow using known command execution functions.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALValidation.html
d60f8216078d-0
langchain_experimental.pal_chain.base.PALChain¶ class langchain_experimental.pal_chain.base.PALChain[source]¶ Bases: Chain Implements Program-Aided Language Models (PAL). This class implements the Program-Aided Language Models (PAL) for generating code solutions. PAL is a technique described in the paper “Program-Aided Language Models” (https://arxiv.org/pdf/2211.10435.pdf). Security note: This class implements an AI technique that generates and evaluatesPython code, which can be dangerous and requires a specially sandboxed environment to be safely used. While this class implements some basic guardrails by limiting available locals/globals and by parsing and inspecting the generated Python AST using PALValidation, those guardrails will not deter sophisticated attackers and are not a replacement for a proper sandbox. Do not use this class on untrusted inputs, with elevated permissions, or without consulting your security team about proper sandboxing! Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param code_validations: PALValidation [Optional]¶ Validations to perform on the generated code. param get_answer_expr: str = 'print(solution())'¶ Expression to use to get the answer from the generated code. param llm_chain: LLMChain [Required]¶
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-1
param llm_chain: LLMChain [Required]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param python_globals: Optional[Dict[str, Any]] = None¶ Python globals and locals to use when executing the generated code. param python_locals: Optional[Dict[str, Any]] = None¶ Python globals and locals to use when executing the generated code. param return_intermediate_steps: bool = False¶ Whether to return intermediate steps in the generated code. param stop: str = '\n\n'¶ Stop token to use when generating code. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param timeout: Optional[int] = 10¶ Timeout in seconds for the generated code to execute. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value,
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-2
will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-3
Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-4
Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-5
directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-6
step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state. with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-7
To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-8
new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_colored_object_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶ Load PAL from colored object prompt. Parameters llm (BaseLanguageModel) – The language model to use for generating code. Returns An instance of PALChain. Return type PALChain classmethod from_math_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶ Load PAL from math prompt. Parameters llm (BaseLanguageModel) – The language model to use for generating code. Returns An instance of PALChain. Return type PALChain classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-9
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable?
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-10
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-11
Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-12
these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-13
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ classmethod validate_code(code: str, code_validations: PALValidation) → None[source]¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-14
type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
d60f8216078d-15
A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
731cf885482e-0
langchain.indexes.vectorstore.VectorstoreIndexCreator¶ class langchain.indexes.vectorstore.VectorstoreIndexCreator[source]¶ Bases: BaseModel Logic for creating indexes. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param embedding: langchain_core.embeddings.Embeddings [Optional]¶ param text_splitter: langchain.text_splitter.TextSplitter [Optional]¶ param vectorstore_cls: Type[langchain_core.vectorstores.VectorStore] = <class 'langchain_community.vectorstores.chroma.Chroma'>¶ param vectorstore_kwargs: dict [Optional]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
731cf885482e-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. from_documents(documents: List[Document]) → VectorStoreIndexWrapper[source]¶ Create a vectorstore index from documents. from_loaders(loaders: List[BaseLoader]) → VectorStoreIndexWrapper[source]¶ Create a vectorstore index from loaders. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
731cf885482e-2
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using VectorstoreIndexCreator¶ Apify HuggingFace dataset Spreedly Image captions Figma Apify Dataset Iugu Stripe Modern Treasury Question Answering Multiple Retrieval Sources
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
0465d959d5b4-0
langchain.indexes.base.RecordManager¶ class langchain.indexes.base.RecordManager(namespace: str)[source]¶ An abstract base class representing the interface for a record manager. Initialize the record manager. Parameters namespace (str) – The namespace for the record manager. Methods __init__(namespace) Initialize the record manager. acreate_schema() Create the database schema for the record manager. adelete_keys(keys) Delete specified records from the database. aexists(keys) Check if the provided keys exist in the database. aget_time() Get the current server time as a high resolution timestamp! alist_keys(*[, before, after, group_ids, limit]) List records in the database based on the provided filters. aupdate(keys, *[, group_ids, time_at_least]) Upsert records into the database. create_schema() Create the database schema for the record manager. delete_keys(keys) Delete specified records from the database. exists(keys) Check if the provided keys exist in the database. get_time() Get the current server time as a high resolution timestamp! list_keys(*[, before, after, group_ids, limit]) List records in the database based on the provided filters. update(keys, *[, group_ids, time_at_least]) Upsert records into the database. __init__(namespace: str) → None[source]¶ Initialize the record manager. Parameters namespace (str) – The namespace for the record manager. abstract async acreate_schema() → None[source]¶ Create the database schema for the record manager. abstract async adelete_keys(keys: Sequence[str]) → None[source]¶ Delete specified records from the database. Parameters keys – A list of keys to delete.
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html
0465d959d5b4-1
Delete specified records from the database. Parameters keys – A list of keys to delete. abstract async aexists(keys: Sequence[str]) → List[bool][source]¶ Check if the provided keys exist in the database. Parameters keys – A list of keys to check. Returns A list of boolean values indicating the existence of each key. abstract async aget_time() → float[source]¶ Get the current server time as a high resolution timestamp! It’s important to get this from the server to ensure a monotonic clock, otherwise there may be data loss when cleaning up old documents! Returns The current server time as a float timestamp. abstract async alist_keys(*, before: Optional[float] = None, after: Optional[float] = None, group_ids: Optional[Sequence[str]] = None, limit: Optional[int] = None) → List[str][source]¶ List records in the database based on the provided filters. Parameters before – Filter to list records updated before this time. after – Filter to list records updated after this time. group_ids – Filter to list records with specific group IDs. limit – optional limit on the number of records to return. Returns A list of keys for the matching records. abstract async aupdate(keys: Sequence[str], *, group_ids: Optional[Sequence[Optional[str]]] = None, time_at_least: Optional[float] = None) → None[source]¶ Upsert records into the database. Parameters keys – A list of record keys to upsert. group_ids – A list of group IDs corresponding to the keys. time_at_least – if provided, updates should only happen if the updated_at field is at least this time. Raises ValueError – If the length of keys doesn’t match the length of group_ids. abstract create_schema() → None[source]¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html
0465d959d5b4-2
abstract create_schema() → None[source]¶ Create the database schema for the record manager. abstract delete_keys(keys: Sequence[str]) → None[source]¶ Delete specified records from the database. Parameters keys – A list of keys to delete. abstract exists(keys: Sequence[str]) → List[bool][source]¶ Check if the provided keys exist in the database. Parameters keys – A list of keys to check. Returns A list of boolean values indicating the existence of each key. abstract get_time() → float[source]¶ Get the current server time as a high resolution timestamp! It’s important to get this from the server to ensure a monotonic clock, otherwise there may be data loss when cleaning up old documents! Returns The current server time as a float timestamp. abstract list_keys(*, before: Optional[float] = None, after: Optional[float] = None, group_ids: Optional[Sequence[str]] = None, limit: Optional[int] = None) → List[str][source]¶ List records in the database based on the provided filters. Parameters before – Filter to list records updated before this time. after – Filter to list records updated after this time. group_ids – Filter to list records with specific group IDs. limit – optional limit on the number of records to return. Returns A list of keys for the matching records. abstract update(keys: Sequence[str], *, group_ids: Optional[Sequence[Optional[str]]] = None, time_at_least: Optional[float] = None) → None[source]¶ Upsert records into the database. Parameters keys – A list of record keys to upsert. group_ids – A list of group IDs corresponding to the keys. time_at_least – if provided, updates should only happen if the updated_at field is at least this time. Raises
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html
0465d959d5b4-3
updated_at field is at least this time. Raises ValueError – If the length of keys doesn’t match the length of group_ids.
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html
49296d880c7b-0
langchain_community.indexes.base.RecordManager¶ class langchain_community.indexes.base.RecordManager(namespace: str)[source]¶ An abstract base class representing the interface for a record manager. Initialize the record manager. Parameters namespace (str) – The namespace for the record manager. Methods __init__(namespace) Initialize the record manager. acreate_schema() Create the database schema for the record manager. adelete_keys(keys) Delete specified records from the database. aexists(keys) Check if the provided keys exist in the database. aget_time() Get the current server time as a high resolution timestamp! alist_keys(*[, before, after, group_ids, limit]) List records in the database based on the provided filters. aupdate(keys, *[, group_ids, time_at_least]) Upsert records into the database. create_schema() Create the database schema for the record manager. delete_keys(keys) Delete specified records from the database. exists(keys) Check if the provided keys exist in the database. get_time() Get the current server time as a high resolution timestamp! list_keys(*[, before, after, group_ids, limit]) List records in the database based on the provided filters. update(keys, *[, group_ids, time_at_least]) Upsert records into the database. __init__(namespace: str) → None[source]¶ Initialize the record manager. Parameters namespace (str) – The namespace for the record manager. abstract async acreate_schema() → None[source]¶ Create the database schema for the record manager. abstract async adelete_keys(keys: Sequence[str]) → None[source]¶ Delete specified records from the database. Parameters keys – A list of keys to delete.
https://api.python.langchain.com/en/latest/indexes/langchain_community.indexes.base.RecordManager.html
49296d880c7b-1
Delete specified records from the database. Parameters keys – A list of keys to delete. abstract async aexists(keys: Sequence[str]) → List[bool][source]¶ Check if the provided keys exist in the database. Parameters keys – A list of keys to check. Returns A list of boolean values indicating the existence of each key. abstract async aget_time() → float[source]¶ Get the current server time as a high resolution timestamp! It’s important to get this from the server to ensure a monotonic clock, otherwise there may be data loss when cleaning up old documents! Returns The current server time as a float timestamp. abstract async alist_keys(*, before: Optional[float] = None, after: Optional[float] = None, group_ids: Optional[Sequence[str]] = None, limit: Optional[int] = None) → List[str][source]¶ List records in the database based on the provided filters. Parameters before – Filter to list records updated before this time. after – Filter to list records updated after this time. group_ids – Filter to list records with specific group IDs. limit – optional limit on the number of records to return. Returns A list of keys for the matching records. abstract async aupdate(keys: Sequence[str], *, group_ids: Optional[Sequence[Optional[str]]] = None, time_at_least: Optional[float] = None) → None[source]¶ Upsert records into the database. Parameters keys – A list of record keys to upsert. group_ids – A list of group IDs corresponding to the keys. time_at_least – if provided, updates should only happen if the updated_at field is at least this time. Raises ValueError – If the length of keys doesn’t match the length of group_ids. abstract create_schema() → None[source]¶
https://api.python.langchain.com/en/latest/indexes/langchain_community.indexes.base.RecordManager.html
49296d880c7b-2
abstract create_schema() → None[source]¶ Create the database schema for the record manager. abstract delete_keys(keys: Sequence[str]) → None[source]¶ Delete specified records from the database. Parameters keys – A list of keys to delete. abstract exists(keys: Sequence[str]) → List[bool][source]¶ Check if the provided keys exist in the database. Parameters keys – A list of keys to check. Returns A list of boolean values indicating the existence of each key. abstract get_time() → float[source]¶ Get the current server time as a high resolution timestamp! It’s important to get this from the server to ensure a monotonic clock, otherwise there may be data loss when cleaning up old documents! Returns The current server time as a float timestamp. abstract list_keys(*, before: Optional[float] = None, after: Optional[float] = None, group_ids: Optional[Sequence[str]] = None, limit: Optional[int] = None) → List[str][source]¶ List records in the database based on the provided filters. Parameters before – Filter to list records updated before this time. after – Filter to list records updated after this time. group_ids – Filter to list records with specific group IDs. limit – optional limit on the number of records to return. Returns A list of keys for the matching records. abstract update(keys: Sequence[str], *, group_ids: Optional[Sequence[Optional[str]]] = None, time_at_least: Optional[float] = None) → None[source]¶ Upsert records into the database. Parameters keys – A list of record keys to upsert. group_ids – A list of group IDs corresponding to the keys. time_at_least – if provided, updates should only happen if the updated_at field is at least this time. Raises
https://api.python.langchain.com/en/latest/indexes/langchain_community.indexes.base.RecordManager.html