There are many different types of memory - please see memory docs for the full catalog. attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] async acombine_docs(docs, callbacks=None, **kwargs)[source] Stuff all documents into one prompt and pass to LLM. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) – Return type Tuple[str, dict] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str combine_docs(docs, callbacks=None, **kwargs)[source] Stuff all documents into one prompt and pass to LLM. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str]
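The combine_docs/acombine_docs pair above implements the stuff strategy: every document is formatted into a single prompt and sent to the LLM once. A minimal sketch of wiring up this stuff-style chain (StuffDocumentsChain, to which these methods belong); the LLM choice, prompt text, and documents are illustrative assumptions:

.. code-block:: python

    from langchain.chains import LLMChain, StuffDocumentsChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.schema import Document

    # LLM chain that receives all documents stuffed into one prompt.
    prompt = PromptTemplate.from_template("Summarize the following text:\n\n{context}")
    llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

    # document_variable_name names the prompt variable that holds the documents.
    chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="context")

    docs = [
        Document(page_content="LangChain composes LLM calls into chains."),
        Document(page_content="Combine-documents chains merge many documents into one answer."),
    ]

    # combine_docs returns an (output_text, extra_info) tuple.
    output, _ = chain.combine_docs(docs)
    print(output)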
prompt_length(docs, **kwargs)[source] Get the prompt length by formatting the prompt. Parameters docs (List[langchain.schema.Document]) – kwargs (Any) – Return type Optional[int] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path="path/chain.yaml") to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.MapRerankDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_variable_name, rank_key, answer_key, metadata_keys=None, return_intermediate_steps=False)[source] Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain Combining documents by mapping a chain over them, then reranking results. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – input_key (str) – output_key (str) – llm_chain (langchain.chains.llm.LLMChain) – document_variable_name (str) – rank_key (str) – answer_key (str) – metadata_keys (Optional[List[str]]) – return_intermediate_steps (bool) – Return type None attribute answer_key: str [Required] Key in output of llm_chain to return as answer. attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
attribute document_variable_name: str [Required] The variable name in the llm_chain to put the documents in. If only one variable in the llm_chain, this need not be provided. attribute llm_chain: LLMChain [Required] Chain to apply to each document individually. attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute metadata_keys: Optional[List[str]] = None attribute rank_key: str [Required] Key in output of llm_chain to rank on. attribute return_intermediate_steps: bool = False attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects
only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] async acombine_docs(docs, callbacks=None, **kwargs)[source] Combine documents in a map rerank manner. Combine by mapping first chain over all documents, then reranking the results. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) – kwargs (Any) – Return type str combine_docs(docs, callbacks=None, **kwargs)[source] Combine documents in a map rerank manner. Combine by mapping first chain over all documents, then reranking the results. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] prompt_length(docs, **kwargs) Return the prompt length given the documents passed in. Returns None if the method does not depend on the prompt length. Parameters docs (List[langchain.schema.Document]) – kwargs (Any) – Return type Optional[int] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str
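The combine_docs/acombine_docs methods above run llm_chain over each document and re-rank the parsed results by rank_key. A hedged sketch of constructing this chain; the regex answer/score format and prompt wording are illustrative assumptions, and the llm_chain's prompt must carry an output parser that produces the rank_key and answer_key fields:

.. code-block:: python

    from langchain.chains import LLMChain, MapRerankDocumentsChain
    from langchain.llms import OpenAI
    from langchain.output_parsers.regex import RegexParser
    from langchain.prompts import PromptTemplate
    from langchain.schema import Document

    # Illustrative parser: the LLM is asked to emit "Answer: ...\nScore: ...".
    parser = RegexParser(
        regex=r"Answer: (.*?)\nScore: (\d+)",
        output_keys=["answer", "score"],
    )
    prompt = PromptTemplate(
        template=(
            "Use the context to answer, then rate your confidence 0-100.\n"
            "Context: {context}\nQuestion: {question}\n"
            "Respond as:\nAnswer: <answer>\nScore: <score>"
        ),
        input_variables=["context", "question"],
        output_parser=parser,
    )
    chain = MapRerankDocumentsChain(
        llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=prompt),
        document_variable_name="context",
        rank_key="score",    # parsed key the per-document results are ranked on
        answer_key="answer", # parsed key returned as the final output
    )
    docs = [Document(page_content="..."), Document(page_content="...")]
    # Extra kwargs (here: question) are passed through to the llm_chain inputs.
    answer, _ = chain.combine_docs(docs, question="What is LangChain?")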
save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path="path/chain.yaml") to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.MapReduceDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, combine_document_chain, collapse_document_chain=None, document_variable_name, return_intermediate_steps=False)[source] Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain Combining documents by mapping a chain over them, then combining results. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – input_key (str) – output_key (str) – llm_chain (langchain.chains.llm.LLMChain) – combine_document_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) – collapse_document_chain (Optional[langchain.chains.combine_documents.base.BaseCombineDocumentsChain]) – document_variable_name (str) – return_intermediate_steps (bool) – Return type None attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute collapse_document_chain: Optional[BaseCombineDocumentsChain] = None Chain to use to collapse intermediary results if needed. If None, will use the combine_document_chain. attribute combine_document_chain: BaseCombineDocumentsChain [Required] Chain to use to combine results of applying llm_chain to documents. attribute document_variable_name: str [Required] The variable name in the llm_chain to put the documents in. If only one variable in the llm_chain, this need not be provided. attribute llm_chain: LLMChain [Required] Chain to apply to each document individually. attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. attribute return_intermediate_steps: bool = False Return the results of the map steps in the output. attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain.
include_run_info (bool) – Whether to include run info in the response. Defaults to False. tags (Optional[List[str]]) – Return type Dict[str, Any] async acombine_docs(docs, callbacks=None, **kwargs)[source] Combine documents in a map reduce manner. Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents). Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str combine_docs(docs, token_max=3000, callbacks=None, **kwargs)[source] Combine documents in a map reduce manner. Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents). Parameters docs (List[langchain.schema.Document]) – token_max (int) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict prep_inputs(inputs) Validate and prep inputs. Parameters inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] prompt_length(docs, **kwargs) Return the prompt length given the documents passed in. Returns None if the method does not depend on the prompt length. Parameters docs (List[langchain.schema.Document]) – kwargs (Any) – Return type Optional[int] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path="path/chain.yaml")
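Putting the pieces above together, a minimal map-reduce sketch; the prompts and the choice of a stuff chain for the reduce step are illustrative assumptions, and collapse_document_chain is omitted, so combine_document_chain also handles any collapse passes:

.. code-block:: python

    from langchain.chains import LLMChain, MapReduceDocumentsChain, StuffDocumentsChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.schema import Document

    llm = OpenAI(temperature=0)

    # Map step: llm_chain is applied to each document individually.
    map_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate.from_template("Summarize this passage:\n\n{context}"),
    )

    # Reduce step: a stuff chain combines the per-document results.
    reduce_chain = StuffDocumentsChain(
        llm_chain=LLMChain(
            llm=llm,
            prompt=PromptTemplate.from_template("Combine these summaries:\n\n{context}"),
        ),
        document_variable_name="context",
    )

    chain = MapReduceDocumentsChain(
        llm_chain=map_chain,
        combine_document_chain=reduce_chain,
        document_variable_name="context",
    )

    docs = [Document(page_content="...") for _ in range(5)]
    # token_max caps intermediate results; oversized ones trigger a collapse pass.
    summary, _ = chain.combine_docs(docs, token_max=3000)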
to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.chains.RefineDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', initial_llm_chain, refine_llm_chain, document_variable_name, initial_response_name, document_prompt=None, return_intermediate_steps=False)[source] Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain Combine documents by doing a first pass and then refining on more documents. Parameters memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) –
input_key (str) – output_key (str) – initial_llm_chain (langchain.chains.llm.LLMChain) – refine_llm_chain (langchain.chains.llm.LLMChain) – document_variable_name (str) – initial_response_name (str) – document_prompt (langchain.prompts.base.BasePromptTemplate) – return_intermediate_steps (bool) – Return type None attribute callback_manager: Optional[BaseCallbackManager] = None Deprecated, use callbacks instead. attribute callbacks: Callbacks = None Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. attribute document_prompt: BasePromptTemplate [Optional] Prompt to use to format each document. attribute document_variable_name: str [Required] The variable name in the initial_llm_chain to put the documents in. If only one variable in the initial_llm_chain, this need not be provided. attribute initial_llm_chain: LLMChain [Required] LLM chain to use on initial document. attribute initial_response_name: str [Required] The variable name to format the initial response in when refining. attribute memory: Optional[BaseMemory] = None Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs
for the full catalog. attribute refine_llm_chain: LLMChain [Required] LLM chain to use when refining. attribute return_intermediate_steps: bool = False Return the results of the refine steps in the output. attribute tags: Optional[List[str]] = None Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. attribute verbose: bool [Optional] Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False) Run the logic of this chain and add to output if desired. Parameters inputs (Union[Dict[str, Any], Any]) – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs (bool) – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info (bool) – Whether to include run info in the response. Defaults to False.
tags (Optional[List[str]]) – Return type Dict[str, Any] async acombine_docs(docs, callbacks=None, **kwargs)[source] Combine by mapping first chain over all, then stuffing into final chain. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] apply(input_list, callbacks=None) Call the chain on all inputs in the list. Parameters input_list (List[Dict[str, Any]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Return type List[Dict[str, str]] async arun(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str combine_docs(docs, callbacks=None, **kwargs)[source] Combine by mapping first chain over all, then stuffing into final chain. Parameters docs (List[langchain.schema.Document]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type Tuple[str, dict] dict(**kwargs) Return dictionary representation of chain. Parameters kwargs (Any) – Return type Dict prep_inputs(inputs) Validate and prep inputs. Parameters
inputs (Union[Dict[str, Any], Any]) – Return type Dict[str, str] prep_outputs(inputs, outputs, return_only_outputs=False) Validate and prep outputs. Parameters inputs (Dict[str, str]) – outputs (Dict[str, str]) – return_only_outputs (bool) – Return type Dict[str, str] prompt_length(docs, **kwargs) Return the prompt length given the documents passed in. Returns None if the method does not depend on the prompt length. Parameters docs (List[langchain.schema.Document]) – kwargs (Any) – Return type Optional[int] run(*args, callbacks=None, tags=None, **kwargs) Run the chain as text in, text out or multiple variables, text out. Parameters args (Any) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type str save(file_path) Save the chain. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the chain to. Return type None Example: .. code-block:: python chain.save(file_path="path/chain.yaml") to_json() Return type Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented] to_json_not_implemented() Return type langchain.load.serializable.SerializedNotImplemented property lc_attributes: Dict Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable.
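To close out the combine-documents chains, a hedged sketch of RefineDocumentsChain as documented above; the prompt wording and the existing_answer variable name are illustrative choices:

.. code-block:: python

    from langchain.chains import LLMChain, RefineDocumentsChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.schema import Document

    llm = OpenAI(temperature=0)

    initial_prompt = PromptTemplate.from_template(
        "Summarize the following text:\n\n{context}"
    )
    refine_prompt = PromptTemplate.from_template(
        "Existing summary:\n{existing_answer}\n\n"
        "Refine it using this additional text:\n\n{context}"
    )

    chain = RefineDocumentsChain(
        initial_llm_chain=LLMChain(llm=llm, prompt=initial_prompt),
        refine_llm_chain=LLMChain(llm=llm, prompt=refine_prompt),
        document_variable_name="context",
        # Prompt variable that carries the running answer between refine passes.
        initial_response_name="existing_answer",
    )

    docs = [Document(page_content="..."), Document(page_content="...")]
    summary, _ = chain.combine_docs(docs)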
Base classes Common schema objects. langchain.schema.get_buffer_string(messages, human_prefix='Human', ai_prefix='AI')[source] Get buffer string of messages. Parameters messages (List[langchain.schema.BaseMessage]) – human_prefix (str) – ai_prefix (str) – Return type str class langchain.schema.AgentAction(tool, tool_input, log)[source] Bases: object Agent’s action to take. Parameters tool (str) – tool_input (Union[str, dict]) – log (str) – Return type None class langchain.schema.AgentFinish(return_values, log)[source] Bases: NamedTuple Agent’s return value. Parameters return_values (dict) – log (str) – return_values: dict Alias for field number 0 log: str Alias for field number 1 count(value, /) Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /) Return first index of value. Raises ValueError if the value is not present. class langchain.schema.Generation(*, text, generation_info=None)[source] Bases: langchain.load.serializable.Serializable Output of a single generation. Parameters text (str) – generation_info (Optional[Dict[str, Any]]) – Return type None attribute generation_info: Optional[Dict[str, Any]] = None Raw generation info response from the provider attribute text: str [Required] Generated text output. classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable.
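The agent and generation objects above are small data holders; a short sketch of constructing them (field values are illustrative):

.. code-block:: python

    from langchain.schema import AgentAction, AgentFinish, Generation

    # AgentAction records a tool invocation the agent wants to make.
    action = AgentAction(tool="search", tool_input="weather in Paris", log="Thought: look it up")

    # AgentFinish is a NamedTuple holding the agent's final return values.
    finish = AgentFinish(return_values={"output": "It is sunny."}, log="Final answer.")
    print(finish.return_values["output"])

    # Generation wraps one raw completion plus optional provider metadata.
    gen = Generation(text="It is sunny.", generation_info={"finish_reason": "stop"})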
class langchain.schema.BaseMessage(*, content, additional_kwargs=None)[source] Bases: langchain.load.serializable.Serializable Message object. Parameters content (str) – additional_kwargs (dict) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]
Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. abstract property type: str Type of the message, used for serialization. class langchain.schema.HumanMessage(*, content, additional_kwargs=None, example=False)[source] Bases: langchain.schema.BaseMessage Type of message that is spoken by the human. Parameters content (str) – additional_kwargs (dict) – example (bool) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) –
dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. property type: str Type of the message, used for serialization. class langchain.schema.AIMessage(*, content, additional_kwargs=None, example=False)[source] Bases: langchain.schema.BaseMessage Type of message that is spoken by the AI. Parameters content (str) – additional_kwargs (dict) – example (bool) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type
Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. property type: str Type of the message, used for serialization. class langchain.schema.SystemMessage(*, content, additional_kwargs=None)[source] Bases: langchain.schema.BaseMessage Type of message that is a system message. Parameters content (str) – additional_kwargs (dict) –
Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) –
exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool This class is LangChain serializable. property type: str Type of the message, used for serialization. class langchain.schema.FunctionMessage(*, content, additional_kwargs=None, name)[source] Bases: langchain.schema.BaseMessage Parameters content (str) – additional_kwargs (dict) – name (str) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]
Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. property type: str Type of the message, used for serialization. class langchain.schema.ChatMessage(*, content, additional_kwargs=None, role)[source] Bases: langchain.schema.BaseMessage Type of message with arbitrary speaker. Parameters content (str) – additional_kwargs (dict) – role (str) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. property type: str Type of the message, used for serialization. langchain.schema.messages_to_dict(messages)[source] Convert messages to dict. Parameters messages (List[langchain.schema.BaseMessage]) – List of messages to convert. Returns List of dicts. Return type List[dict] langchain.schema.messages_from_dict(messages)[source] Convert messages from dict. Parameters messages (List[dict]) – List of messages (dicts) to convert. Returns List of messages (BaseMessages). Return type List[langchain.schema.BaseMessage] class langchain.schema.ChatGeneration(*, text='', generation_info=None, message)[source] Bases: langchain.schema.Generation Output of a single generation. Parameters text (str) – generation_info (Optional[Dict[str, Any]]) –
message (langchain.schema.BaseMessage) – Return type None attribute generation_info: Optional[Dict[str, Any]] = None Raw generation info response from the provider attribute text: str = '' Generated text output. classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool This class is LangChain serializable. class langchain.schema.RunInfo(*, run_id)[source] Bases: pydantic.main.BaseModel Class that contains all relevant metadata for a Run. Parameters run_id (uuid.UUID) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns
new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) –
Return type None class langchain.schema.ChatResult(*, generations, llm_output=None)[source] Bases: pydantic.main.BaseModel Class that contains all relevant information for a Chat Result. Parameters generations (List[langchain.schema.ChatGeneration]) – llm_output (Optional[dict]) – Return type None attribute generations: List[langchain.schema.ChatGeneration] [Required] List of the things generated. attribute llm_output: Optional[dict] = None For arbitrary LLM provider specific output. classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) –
Return type None class langchain.schema.LLMResult(*, generations, llm_output=None, run=None)[source] Bases: pydantic.main.BaseModel Class that contains all relevant information for an LLM Result. Parameters generations (List[List[langchain.schema.Generation]]) – llm_output (Optional[dict]) – run (Optional[List[langchain.schema.RunInfo]]) – Return type None attribute generations: List[List[langchain.schema.Generation]] [Required] List of the things generated. This is List[List[]] because each input could have multiple generations. attribute llm_output: Optional[dict] = None For arbitrary LLM provider specific output. attribute run: Optional[List[langchain.schema.RunInfo]] = None Run metadata. classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny flatten()[source] Flatten generations into a single list. Return type List[langchain.schema.LLMResult] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) –
exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None class langchain.schema.PromptValue[source] Bases: langchain.load.serializable.Serializable, abc.ABC Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode abstract to_messages()[source] Return prompt as messages. Return type List[langchain.schema.BaseMessage] abstract to_string()[source] Return prompt as string.
Return type str classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.schema.BaseMemory[source] Bases: langchain.load.serializable.Serializable, abc.ABC Base interface for memory in chains. Return type None abstract clear()[source] Clear memory contents. Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode abstract load_memory_variables(inputs)[source] Return key-value pairs given the text input to the chain. If None, return all memories. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] abstract save_context(inputs, outputs)[source] Save the context of this model run to memory. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable.
abstract property memory_variables: List[str] Input keys this memory class will load dynamically. class langchain.schema.BaseChatMessageHistory[source] Bases: abc.ABC Base interface for chat message history. See ChatMessageHistory for default implementation. add_user_message(message)[source] Add a user message to the store Parameters message (str) – Return type None add_ai_message(message)[source] Add an AI message to the store Parameters message (str) – Return type None add_message(message)[source] Add a self-created message to the store Parameters message (langchain.schema.BaseMessage) – Return type None abstract clear()[source] Remove all messages from the store Return type None class langchain.schema.Document(*, page_content, metadata=None)[source] Bases: langchain.load.serializable.Serializable Interface for interacting with a document. Parameters page_content (str) – metadata (dict) – Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False) Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.schema.BaseRetriever[source] Bases: abc.ABC Base interface for retrievers. abstract get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] abstract async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters
query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] langchain.schema.Memory alias of langchain.schema.BaseMemory class langchain.schema.BaseLLMOutputParser[source] Bases: langchain.load.serializable.Serializable, abc.ABC, Generic[langchain.schema.T] Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – Return type DictStrAny json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode abstract parse_result(result)[source] Parse LLM Result. Parameters result (List[langchain.schema.Generation]) – Return type langchain.schema.T classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.schema.BaseOutputParser[source] Bases: langchain.schema.BaseLLMOutputParser, abc.ABC, Generic[langchain.schema.T] Class to parse the output of an LLM call. Output parsers help structure language model responses. Return type None classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs)[source] Return dictionary representation of output parser. Parameters kwargs (Any) – Return type Dict get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode abstract parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type langchain.schema.T
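To make the parse() contract concrete, here is a minimal sketch of a custom parser (the class name and list format are illustrative, not part of langchain): it implements get_format_instructions() and parse(), and raises OutputParserException, documented below, when nothing can be recovered, so callers can catch the failure and retry.

from typing import List
from langchain.schema import BaseOutputParser, OutputParserException

class CommaSeparatedListParser(BaseOutputParser):
    # Hypothetical example parser: splits an LLM completion on commas.
    def get_format_instructions(self) -> str:
        return "Respond with a comma-separated list."

    def parse(self, text: str) -> List[str]:
        items = [part.strip() for part in text.split(",") if part.strip()]
        if not items:
            raise OutputParserException(f"Could not parse a list from: {text!r}")
        return items

print(CommaSeparatedListParser().parse("red, green, blue"))  # ['red', 'green', 'blue']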
parse_result(result)[source] Parse LLM Result. Parameters result (List[langchain.schema.Generation]) – Return type langchain.schema.T parse_with_prompt(completion, prompt)[source] Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion (str) – output of language model prompt (langchain.schema.PromptValue) – prompt value Returns structured output Return type Any classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.schema.NoOpOutputParser[source] Bases: langchain.schema.BaseOutputParser[str] Output parser that just returns the text as is. Return type None classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return dictionary representation of output parser. Parameters kwargs (Any) – Return type Dict get_format_instructions() Instructions on how the LLM output should be formatted. Return type str json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type str parse_result(result) Parse LLM Result. Parameters result (List[langchain.schema.Generation]) – Return type langchain.schema.T parse_with_prompt(completion, prompt) Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion (str) – output of language model prompt (langchain.schema.PromptValue) – prompt value Returns structured output Return type Any classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. exception langchain.schema.OutputParserException(error, observation=None, llm_output=None, send_to_llm=False)[source] Bases: ValueError Exception that output parsers should raise to signify a parsing error. This exists to differentiate parsing errors from other code or execution errors that also may arise inside the output parser. OutputParserExceptions will be available to catch and handle in ways to fix the parsing error, while other errors will be raised. Parameters error (Any) – observation (str | None) – llm_output (str | None) – send_to_llm (bool) – add_note() Exception.add_note(note) – add a note to the exception with_traceback() Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. class langchain.schema.BaseDocumentTransformer[source] Bases: abc.ABC Base interface for transforming documents. abstract transform_documents(documents, **kwargs)[source] Transform a list of documents. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] abstract async atransform_documents(documents, **kwargs)[source] Asynchronously transform a list of documents.
Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document]
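As a quick illustration of the messages_to_dict() and messages_from_dict() helpers documented above, the following sketch round-trips a chat history through plain dicts (HumanMessage and AIMessage come from the same langchain.schema module, though their entries are not shown here):

from langchain.schema import AIMessage, HumanMessage, messages_from_dict, messages_to_dict

history = [HumanMessage(content="hi"), AIMessage(content="hello")]
as_dicts = messages_to_dict(history)     # JSON-serializable list of dicts
restored = messages_from_dict(as_dicts)  # back to BaseMessage objects
assert restored == history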
Chat Models
class langchain.chat_models.ChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)[source] Bases: langchain.chat_models.base.BaseChatModel Wrapper around OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.chat_models import ChatOpenAI openai = ChatOpenAI(model_name="gpt-3.5-turbo") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model (str) – temperature (float) – model_kwargs (Dict[str, Any]) – openai_api_key (Optional[str]) – openai_api_base (Optional[str]) – openai_organization (Optional[str]) – openai_proxy (Optional[str]) – request_timeout (Optional[Union[float, Tuple[float, float]]]) – max_retries (int) – streaming (bool) –
n (int) – max_tokens (Optional[int]) – tiktoken_model_name (Optional[str]) – Return type None attribute max_retries: int = 6 Maximum number of retries to make when generating. attribute max_tokens: Optional[int] = None Maximum number of tokens to generate. attribute model_kwargs: Dict[str, Any] [Optional] Holds any model parameters valid for create call not explicitly specified. attribute model_name: str = 'gpt-3.5-turbo' (alias 'model') Model name to use. attribute n: int = 1 Number of chat completions to generate for each prompt. attribute openai_api_base: Optional[str] = None Base URL path for API requests, leave blank if not using a proxy or service emulator. attribute openai_api_key: Optional[str] = None attribute openai_organization: Optional[str] = None attribute openai_proxy: Optional[str] = None attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None Timeout for requests to OpenAI completion API. Default is 600 seconds. attribute streaming: bool = False Whether to stream the results or not. attribute temperature: float = 0.7 What sampling temperature to use. attribute tiktoken_model_name: Optional[str] = None The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. completion_with_retry(**kwargs)[source] Use tenacity to retry the completion call. Parameters kwargs (Any) – Return type Any get_num_tokens_from_messages(messages)[source] Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package. Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text)[source] Get the tokens present in the text with tiktoken package. Parameters text (str) – Return type List[int] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable.
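Building on the constructor example above, a minimal usage sketch (assuming the OPENAI_API_KEY environment variable is set; the prompt text is arbitrary): the model can be called directly on a list of messages, and get_num_tokens_from_messages() can be used to check the prompt size first.

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
messages = [HumanMessage(content="Translate 'bonjour' to English.")]
print(chat.get_num_tokens_from_messages(messages))  # prompt size, counted with tiktoken
reply = chat(messages)  # returns an AIMessage
print(reply.content)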
class langchain.chat_models.AzureChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key='', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source] Bases: langchain.chat_models.openai.ChatOpenAI Wrapper around Azure OpenAI Chat Completion API. To use this class you must have a deployed model on Azure OpenAI. Use deployment_name in the constructor to refer to the "Model deployment name" in the Azure portal. In addition, you should have the openai python package installed, and the following environment variables set or passed in constructor in lower case: - OPENAI_API_TYPE (default: azure) - OPENAI_API_KEY - OPENAI_API_BASE - OPENAI_API_VERSION - OPENAI_PROXY For example, if you have gpt-35-turbo deployed, with the deployment name 35-turbo-dev, the constructor should look like: AzureChatOpenAI( deployment_name="35-turbo-dev", openai_api_version="2023-03-15-preview", ) Be aware the API version may change. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Parameters cache (Optional[bool]) – verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model (str) – temperature (float) – model_kwargs (Dict[str, Any]) – openai_api_key (str) – openai_api_base (str) – openai_organization (str) – openai_proxy (str) – request_timeout (Optional[Union[float, Tuple[float, float]]]) – max_retries (int) – streaming (bool) – n (int) – max_tokens (Optional[int]) – tiktoken_model_name (Optional[str]) – deployment_name (str) – openai_api_type (str) – openai_api_version (str) – Return type None attribute deployment_name: str = '' attribute openai_api_base: str = '' Base URL path for API requests, leave blank if not using a proxy or service emulator. attribute openai_api_key: str = '' attribute openai_api_type: str = 'azure' attribute openai_api_version: str = '' attribute openai_organization: str = '' attribute openai_proxy: str = '' class langchain.chat_models.FakeListChatModel(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source] Bases: langchain.chat_models.base.SimpleChatModel Fake ChatModel for testing purposes. Parameters
cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – responses (List) – i (int) – Return type None attribute i: int = 0 attribute responses: List [Required] class langchain.chat_models.PromptLayerChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source] Bases: langchain.chat_models.openai.ChatOpenAI Wrapper around OpenAI Chat large language models and PromptLayer. To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your openAI API key and promptlayer key respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerChatOpenAI adds two optional parameters. Parameters pl_tags (Optional[List[str]]) – List of strings to tag the request with. return_pl_id (Optional[bool]) – If True, the PromptLayer request ID will be returned in the generation_info field of the
Generation object. cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model (str) – temperature (float) – model_kwargs (Dict[str, Any]) – openai_api_key (Optional[str]) – openai_api_base (Optional[str]) – openai_organization (Optional[str]) – openai_proxy (Optional[str]) – request_timeout (Optional[Union[float, Tuple[float, float]]]) – max_retries (int) – streaming (bool) – n (int) – max_tokens (Optional[int]) – tiktoken_model_name (Optional[str]) – Return type None Example from langchain.chat_models import PromptLayerChatOpenAI openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo") attribute pl_tags: Optional[List[str]] = None attribute return_pl_id: Optional[bool] = False class langchain.chat_models.ChatAnthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source] Bases: langchain.chat_models.base.BaseChatModel, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language model. To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.chat_models import ChatAnthropic model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key") Parameters client (Any) – model (str) – max_tokens_to_sample (int) – temperature (Optional[float]) – top_k (Optional[int]) – top_p (Optional[float]) – streaming (bool) – default_request_timeout (Optional[Union[float, Tuple[float, float]]]) – anthropic_api_url (Optional[str]) – anthropic_api_key (Optional[str]) – HUMAN_PROMPT (Optional[str]) – AI_PROMPT (Optional[str]) – count_tokens (Optional[Callable[[str], int]]) – cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – Return type None get_num_tokens(text)[source] Calculate number of tokens. Parameters text (str) – Return type int property lc_serializable: bool Return whether or not the class is serializable.
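A small sketch of the token-counting helper above (the API key shown is a placeholder):

from langchain.chat_models import ChatAnthropic

chat = ChatAnthropic(anthropic_api_key="my-api-key")
print(chat.get_num_tokens("How many tokens is this sentence?"))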
class langchain.chat_models.ChatGooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='models/chat-bison-001', google_api_key=None, temperature=None, top_p=None, top_k=None, n=1)[source] Bases: langchain.chat_models.base.BaseChatModel, pydantic.main.BaseModel Wrapper around Google's PaLM Chat API. To use you must have the google.generativeai Python package installed and either: The GOOGLE_API_KEY environment variable set with your API key, or Pass your API key using the google_api_key kwarg to the ChatGooglePalm constructor. Example from langchain.chat_models import ChatGooglePalm chat = ChatGooglePalm() Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model_name (str) – google_api_key (Optional[str]) – temperature (Optional[float]) – top_p (Optional[float]) – top_k (Optional[int]) – n (int) – Return type None attribute google_api_key: Optional[str] = None attribute model_name: str = 'models/chat-bison-001' Model name to use. attribute n: int = 1 Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated. attribute temperature: Optional[float] = None Run inference with this temperature. Must be in the closed interval [0.0, 1.0]. attribute top_k: Optional[int] = None Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive. attribute top_p: Optional[float] = None Decode using nucleus sampling: consider the smallest set of tokens whose probability sum is at least top_p. Must be in the closed interval [0.0, 1.0]. class langchain.chat_models.ChatVertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='chat-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5)[source] Bases: langchain.llms.vertexai._VertexAICommon, langchain.chat_models.base.BaseChatModel Wrapper around Vertex AI large language models. Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (_LanguageModel) – model_name (str) – temperature (float) – max_output_tokens (int) – top_p (float) – top_k (int) – stop (Optional[List[str]]) – project (Optional[str]) – location (str) – credentials (Any) –
request_parallelism (int) – Return type None attribute model_name: str = 'chat-bison' Model name to use.
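One testing note before the completion-style LLM wrappers: FakeListChatModel, documented above, replays its responses list in order, which makes deterministic unit tests possible without network calls. A minimal sketch:

from langchain.chat_models import FakeListChatModel
from langchain.schema import HumanMessage

fake = FakeListChatModel(responses=["first canned reply", "second canned reply"])
print(fake([HumanMessage(content="anything")]).content)  # "first canned reply"
print(fake([HumanMessage(content="anything")]).content)  # "second canned reply"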
LLMs
Wrappers on top of large language model APIs. class langchain.llms.AI21(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='j2-jumbo-instruct', temperature=0.7, maxTokens=256, minTokens=0, topP=1.0, presencePenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults=1, logitBias=None, ai21_api_key=None, stop=None, base_url=None)[source] Bases: langchain.llms.base.LLM Wrapper around AI21 large language models. To use, you should have the environment variable AI21_API_KEY set with your API key. Example from langchain.llms import AI21 ai21 = AI21(model="j2-jumbo-instruct") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – model (str) – temperature (float) – maxTokens (int) – minTokens (int) –
topP (float) – presencePenalty (langchain.llms.ai21.AI21PenaltyData) – countPenalty (langchain.llms.ai21.AI21PenaltyData) – frequencyPenalty (langchain.llms.ai21.AI21PenaltyData) – numResults (int) – logitBias (Optional[Dict[str, float]]) – ai21_api_key (Optional[str]) – stop (Optional[List[str]]) – base_url (Optional[str]) – Return type None attribute base_url: Optional[str] = None Base url to use, if None decides based on model name. attribute countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True) Penalizes repeated tokens according to count. attribute frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True) Penalizes repeated tokens according to frequency. attribute logitBias: Optional[Dict[str, float]] = None Adjust the probability of specific tokens being generated. attribute maxTokens: int = 256 The maximum number of tokens to generate in the completion. attribute minTokens: int = 0 The minimum number of tokens to generate in the completion. attribute model: str = 'j2-jumbo-instruct' Model name to use.
attribute numResults: int = 1 How many completions to generate for each prompt. attribute presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True) Penalizes repeated tokens. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: float = 0.7 What sampling temperature to use. attribute topP: float = 1.0 Total probability mass of tokens to consider at each step. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
get_num_tokens(text)
Get the number of tokens present in the text.
Parameters
text (str) –
Return type
int
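generate is the synchronous batch entry point: the returned LLMResult.generations list is parallel to the input prompts. A short sketch; the key and prompts are illustrative only:
from langchain.llms import AI21

llm = AI21(ai21_api_key="my-api-key")  # placeholder key
result = llm.generate(["Say hello in French.", "Say hello in Spanish."])
# result.generations[i] is the list of completions for prompt i,
# each a Generation object carrying the completion text
for prompt_generations in result.generations:
    print(prompt_generations[0].text)
print(result.llm_output)  # provider-specific metadata; may be None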
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
Return type
int
get_token_ids(text)
Get the token IDs present in the text.
Parameters
text (str) –
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
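predict is the convenience call for single-string use, and its stop argument truncates the completion at the first matching sequence. A small sketch; the stop token choice here is arbitrary:
from langchain.llms import AI21

llm = AI21(ai21_api_key="my-api-key")  # placeholder key
# Generation halts as soon as the model emits a newline
answer = llm.predict("Q: What is the capital of France?\nA:", stop=["\n"])
print(answer)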
save(file_path)
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type
None
Example
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
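To round out the AI21 entry, here is a hedged sketch showing the penalty fields in use; the key is a placeholder and scale=1 is an arbitrary illustration, not a recommended value:
from langchain.llms import AI21
from langchain.llms.ai21 import AI21PenaltyData

llm = AI21(
    ai21_api_key="my-api-key",  # placeholder
    model="j2-jumbo-instruct",
    maxTokens=128,
    # Discourage tokens that have already appeared in the output
    presencePenalty=AI21PenaltyData(scale=1),
)
print(llm("Write a two-sentence product description for a solar lamp."))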
class langchain.llms.AlephAlpha(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='luminous-base', maximum_tokens=64, temperature=0.0, top_k=0, top_p=0.0, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalties_include_prompt=False, use_multiplicative_presence_penalty=False, penalty_bias=None, penalty_exceptions=None, penalty_exceptions_include_stop_sequences=None, best_of=None, n=1, logit_bias=None, log_probs=None, tokens=False, disable_optimizations=False, minimum_tokens=0, echo=False, use_multiplicative_frequency_penalty=False, sequence_penalty=0.0, sequence_penalty_min_length=2, use_multiplicative_sequence_penalty=False, completion_bias_inclusion=None, completion_bias_inclusion_first_token_only=False, completion_bias_exclusion=None, completion_bias_exclusion_first_token_only=False, contextual_control_threshold=None, control_log_additive=True, repetition_penalties_include_completion=True, raw_completion=False, aleph_alpha_api_key=None, stop_sequences=None)[source]
Bases: langchain.llms.base.LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Parameters are explained in more depth here: https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Parameters
cache (Optional[bool]) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
tags (Optional[List[str]]) –
client (Any) –
model (Optional[str]) –
maximum_tokens (int) –
temperature (float) –
top_k (int) –
top_p (float) –
presence_penalty (float) –
frequency_penalty (float) –
repetition_penalties_include_prompt (Optional[bool]) –
use_multiplicative_presence_penalty (Optional[bool]) –
penalty_bias (Optional[str]) –
penalty_exceptions (Optional[List[str]]) –
penalty_exceptions_include_stop_sequences (Optional[bool]) –
best_of (Optional[int]) –
n (int) –
logit_bias (Optional[Dict[int, float]]) –
log_probs (Optional[int]) –
tokens (Optional[bool]) –
disable_optimizations (Optional[bool]) –
minimum_tokens (Optional[int]) –
echo (bool) –
use_multiplicative_frequency_penalty (bool) –
sequence_penalty (float) –
sequence_penalty_min_length (int) –
use_multiplicative_sequence_penalty (bool) –
completion_bias_inclusion (Optional[Sequence[str]]) –
completion_bias_inclusion_first_token_only (bool) –
completion_bias_exclusion (Optional[Sequence[str]]) –
completion_bias_exclusion_first_token_only (bool) –
contextual_control_threshold (Optional[float]) –
control_log_additive (Optional[bool]) –
repetition_penalties_include_completion (bool) –
raw_completion (bool) –
aleph_alpha_api_key (Optional[str]) –
stop_sequences (Optional[List[str]]) –
Return type
None
attribute aleph_alpha_api_key: Optional[str] = None
API key for Aleph Alpha API.
attribute best_of: Optional[int] = None
Returns the completion with the "best of" result, i.e. the highest log probability per token.
attribute completion_bias_exclusion_first_token_only: bool = False
Only consider the first token for the completion_bias_exclusion.
attribute contextual_control_threshold: Optional[float] = None
If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens.
attribute control_log_additive: Optional[bool] = True
True: apply control by adding log(control_factor) to the attention scores. False: apply control as (attention_scores - attention_scores.min(-1)) * control_factor.
attribute echo: bool = False
Echo the prompt in the completion.
attribute frequency_penalty: float = 0.0
Penalizes repeated tokens according to frequency.
attribute log_probs: Optional[int] = None
Number of top log probabilities to be returned for each generated token.
attribute logit_bias: Optional[Dict[int, float]] = None
The logit bias allows you to influence the likelihood of generating particular tokens.
attribute maximum_tokens: int = 64
The maximum number of tokens to be generated.
attribute minimum_tokens: Optional[int] = 0
Generate at least this number of tokens.
attribute model: Optional[str] = 'luminous-base'
Model name to use.
attribute n: int = 1
How many completions to generate for each prompt.
attribute penalty_bias: Optional[str] = None
Penalty bias for the completion.
attribute penalty_exceptions: Optional[List[str]] = None
List of strings that may be generated without penalty, regardless of other penalty settings.
attribute penalty_exceptions_include_stop_sequences: Optional[bool] = None
Whether stop_sequences should be included in penalty_exceptions.
attribute presence_penalty: float = 0.0
Penalizes repeated tokens.
attribute raw_completion: bool = False
Force the raw completion of the model to be returned.
attribute repetition_penalties_include_completion: bool = True
Flag deciding whether the presence penalty or frequency penalty is updated from the completion.
attribute repetition_penalties_include_prompt: Optional[bool] = False
Flag deciding whether the presence penalty or frequency penalty is updated from the prompt.
attribute stop_sequences: Optional[List[str]] = None
Stop sequences to use.
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.
attribute temperature: float = 0.0
A non-negative float that tunes the degree of randomness in generation.
attribute tokens: Optional[bool] = False
Return the tokens of the completion.
attribute top_k: int = 0
Number of most likely tokens to consider at each step.
attribute top_p: float = 0.0
Total probability mass of tokens to consider at each step.
attribute use_multiplicative_presence_penalty: Optional[bool] = False
Flag deciding whether the presence penalty is applied multiplicatively (True) or additively (False).
attribute verbose: bool [Optional]
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)
Check the cache and run the LLM on the given prompt and input.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
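agenerate_prompt above (and its synchronous counterpart generate_prompt, listed further below) take PromptValue objects rather than raw strings, which is how chains hand formatted prompts to an LLM. A hedged sketch using a PromptTemplate; the template text and key are illustrative:
from langchain.llms import AlephAlpha
from langchain.prompts import PromptTemplate

llm = AlephAlpha(aleph_alpha_api_key="my-api-key")  # placeholder key
template = PromptTemplate.from_template("Q: {question}\nA:")
prompt_value = template.format_prompt(question="What is the capital of France?")
# generate_prompt consumes PromptValue objects, not plain strings
result = llm.generate_prompt([prompt_value])
print(result.generations[0][0].text)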
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
get_num_tokens(text)
Get the number of tokens present in the text.
Parameters
text (str) –
Return type
int
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
Return type
int
get_token_ids(text)
Get the token IDs present in the text.
Parameters
text (str) –
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
save(file_path)
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type
None
Example
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
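A slightly fuller AlephAlpha configuration than the one-liner above, as a hedged sketch; the parameter values are arbitrary and the key is a placeholder:
from langchain.llms import AlephAlpha

llm = AlephAlpha(
    aleph_alpha_api_key="my-api-key",  # placeholder
    model="luminous-base",
    maximum_tokens=64,
    temperature=0.0,
    stop_sequences=["\n\n"],  # stop at the first blank line
)
print(llm("Provide a one-sentence summary of photosynthesis:"))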
class langchain.llms.AmazonAPIGateway(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url, model_kwargs=None, content_handler=<langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>)[source]
Bases: langchain.llms.base.LLM
Wrapper around a custom Amazon API Gateway endpoint.
Parameters
cache (Optional[bool]) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
tags (Optional[List[str]]) –
api_url (str) –
model_kwargs (Optional[Dict]) –
content_handler (langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway) –
Return type
None
attribute api_url: str [Required]
API Gateway URL
attribute content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>
The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint.
attribute model_kwargs: Optional[Dict] = None
Keyword arguments to pass to the model.
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.
attribute verbose: bool [Optional]
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)
Check the cache and run the LLM on the given prompt and input.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
get_num_tokens(text)
Get the number of tokens present in the text.
Parameters
text (str) –
Return type
int
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
Return type
int
get_token_ids(text)
Get the token IDs present in the text.
Parameters
text (str) –
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
save(file_path)
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type
None
Example
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
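The reference above ships no usage example for AmazonAPIGateway, so here is a hedged sketch; the URL is a placeholder, and the model_kwargs keys depend entirely on the model deployed behind your gateway:
from langchain.llms import AmazonAPIGateway

llm = AmazonAPIGateway(
    api_url="https://<api-id>.execute-api.<region>.amazonaws.com/prod",  # placeholder
    # Forwarded to the model behind the gateway; keys vary by model
    model_kwargs={"max_new_tokens": 100, "temperature": 0.7},
)
print(llm("Translate 'good morning' into French."))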
class langchain.llms.Anthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]
Bases: langchain.llms.base.LLM, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropic's large language models.
To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Parameters
client (Any) –
model (str) –
max_tokens_to_sample (int) –
temperature (Optional[float]) –
top_k (Optional[int]) –
top_p (Optional[float]) –
streaming (bool) –
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) –
anthropic_api_url (Optional[str]) –
anthropic_api_key (Optional[str]) –
HUMAN_PROMPT (Optional[str]) –
AI_PROMPT (Optional[str]) –
count_tokens (Optional[Callable[[str], int]]) –
cache (Optional[bool]) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
tags (Optional[List[str]]) –
Return type
None
attribute default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None
Timeout for requests to the Anthropic Completion API. Default is 600 seconds.
attribute max_tokens_to_sample: int = 256
Denotes the number of tokens to predict per generation.
attribute model: str = 'claude-v1'
Model name to use.
attribute streaming: bool = False
Whether to stream the results.
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.
attribute temperature: Optional[float] = None
A non-negative float that tunes the degree of randomness in generation.
attribute top_k: Optional[int] = None
Number of most likely tokens to consider at each step.
attribute top_p: Optional[float] = None
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)
Check the cache and run the LLM on the given prompt and input.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
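Because these wrappers are pydantic models, copy(update=...) is a cheap way to derive a variant with different settings; note that, as documented above, update values are not re-validated. A hedged sketch with a placeholder key:
from langchain.llms import Anthropic

llm = Anthropic(anthropic_api_key="my-api-key")  # placeholder key
# Derive a more exploratory variant; update= skips validation, so pass sane values
hot_llm = llm.copy(update={"temperature": 0.9})
print(hot_llm.temperature)  # 0.9; the original llm is unchanged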
dict(**kwargs)
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]
Calculate number of tokens.
Parameters
text (str) –
Return type
int
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
Return type
int
get_token_ids(text)
Get the token IDs present in the text.
Parameters
text (str) –
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments are as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
save(file_path)
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type
None
Example
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)[source]
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
Parameters
prompt (str) – The prompt to pass into the model.
stop (Optional[List[str]]) – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Return type
Generator
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = model.stream(prompt)
for token in generator:
    print(token)
classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.
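Anthropic overrides get_num_tokens with its own tokenizer (note the [source] marker in its entry above), which makes it easy to check prompt size against max_tokens_to_sample before sending a request. A hedged sketch; the key is a placeholder and the anthropic package must be installed:
from langchain.llms import Anthropic

llm = Anthropic(anthropic_api_key="my-api-key", max_tokens_to_sample=256)
prompt = "\n\nHuman: Summarize the plot of Hamlet.\n\nAssistant:"
# Count prompt tokens locally before calling the API
print(llm.get_num_tokens(prompt))
print(llm(prompt))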
class langchain.llms.Anyscale(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_kwargs=None, anyscale_service_url=None, anyscale_service_route=None, anyscale_service_token=None)[source]
Bases: langchain.llms.base.LLM
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale Service, or pass them as named parameters to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
                    anyscale_service_route="SERVICE_ROUTE",
                    anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list = []
@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Parameters
cache (Optional[bool]) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
tags (Optional[List[str]]) –
model_kwargs (Optional[dict]) –
anyscale_service_url (Optional[str]) –
anyscale_service_route (Optional[str]) –
anyscale_service_token (Optional[str]) –
Return type
None
attribute model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model. Reserved for future use.
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.
attribute verbose: bool [Optional]
Whether to print out response text.
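The Ray pattern above fans each prompt out as a remote task, which pays off for large batches or slow endpoints; for modest batches, the inherited generate call (see the method listing below) batches prompts without needing a Ray cluster. A hedged sketch reusing the placeholder service values:
from langchain.llms import Anyscale

llm = Anyscale(
    anyscale_service_url="SERVICE_URL",      # placeholders, as in the example above
    anyscale_service_route="SERVICE_ROUTE",
    anyscale_service_token="SERVICE_TOKEN",
)
prompts = ["Summarize Hamlet in one line.", "Summarize Macbeth in one line."]
result = llm.generate(prompts)  # single batched call; no Ray required
for generations in result.generations:
    print(generations[0].text)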
__call__(prompt, stop=None, callbacks=None, **kwargs)
Check the cache and run the LLM on the given prompt and input.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompts and input.
Parameters
prompts (List[str]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) –
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.