There are many different types of memory - please see memory docs
for the full catalog.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
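A minimal, hedged sketch of driving the async API above; chain and docs are assumed to already exist (they are placeholders, not defined on this page), and the dict key uses the documented default input key input_documents:
.. code-block:: python

    # Hedged sketch: `chain` and `docs` are assumed to have been built elsewhere.
    import asyncio

    async def main():
        # acall takes a dict keyed by the chain's input key ("input_documents" by default).
        result = await chain.acall({"input_documents": docs}, return_only_outputs=True)
        print(result)  # e.g. {"output_text": "..."} given the default output key

    asyncio.run(main())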
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Stuff all documents into one prompt and pass to LLM.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Stuff all documents into one prompt and pass to LLM.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)[source]ο
Get the prompt length by formatting the prompt.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
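As a hedged illustration of run with multiple named inputs (the chain is a placeholder, and a question variable only exists if the chain's prompt actually declares one):
.. code-block:: python

    # `chain` is assumed to be a combine-documents chain whose prompt takes
    # the documents plus a "question" variable; both names are illustrative.
    answer = chain.run(input_documents=docs, question="What is the main topic?")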
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
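Before the next class, a hedged sketch of the synchronous combine_docs API described above; the chain instance is assumed to be an already-constructed combine-documents chain (its construction is not shown on this page):
.. code-block:: python

    from langchain.schema import Document

    docs = [
        Document(page_content="LangChain composes LLM calls into chains."),
        Document(page_content="Combine-documents chains merge many documents into one answer."),
    ]
    # combine_docs returns a Tuple[str, dict]: the output text plus extra info.
    output_text, extra = chain.combine_docs(docs)  # `chain` assumed constructed elsewhere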
class langchain.chains.MapRerankDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_variable_name, rank_key, answer_key, metadata_keys=None, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
llm_chain (langchain.chains.llm.LLMChain) β
document_variable_name (str) β
rank_key (str) β
answer_key (str) β
metadata_keys (Optional[List[str]]) β
return_intermediate_steps (bool) β
Return type
None
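A hedged construction sketch for MapRerankDocumentsChain. The llm, prompt and RegexParser below are illustrative placeholders (not part of this page); the only hard requirement implied by the parameters above is that rank_key and answer_key match keys produced by the llm_chain's output parser:
.. code-block:: python

    from langchain.chains import LLMChain, MapRerankDocumentsChain
    from langchain.output_parsers.regex import RegexParser
    from langchain.prompts import PromptTemplate

    parser = RegexParser(
        regex=r"Answer:\s*(.*)\nScore:\s*(\d+)",
        output_keys=["answer", "score"],
    )
    prompt = PromptTemplate(
        template=(
            "Use the context to answer, then give a 0-100 score.\n"
            "{context}\nQuestion: {question}\nAnswer:\nScore:"
        ),
        input_variables=["context", "question"],
        output_parser=parser,
    )
    chain = MapRerankDocumentsChain(
        llm_chain=LLMChain(llm=llm, prompt=prompt),  # `llm` assumed defined elsewhere
        document_variable_name="context",
        rank_key="score",
        answer_key="answer",
    )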
attribute answer_key: str [Required]ο
Key in output of llm_chain to return as answer.
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute document_variable_name: str [Required]ο
The variable name in the llm_chain to put the documents in.
If only one variable in the llm_chain, this need not be provided.
attribute llm_chain: LLMChain [Required]ο
Chain to apply to each document individually.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute metadata_keys: Optional[List[str]] = Noneο
attribute rank_key: str [Required]ο
Key in output of llm_chain to rank on.
attribute return_intermediate_steps: bool = Falseο
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map rerank manner.
Combine by mapping first chain over all documents, then reranking the results.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map rerank manner.
Combine by mapping first chain over all documents, then reranking the results.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.MapReduceDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, combine_document_chain, collapse_document_chain=None, document_variable_name, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combining documents by mapping a chain over them, then combining results.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
llm_chain (langchain.chains.llm.LLMChain) β
combine_document_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) β
collapse_document_chain (Optional[langchain.chains.combine_documents.base.BaseCombineDocumentsChain]) β
document_variable_name (str) β
return_intermediate_steps (bool) β
Return type
None
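A hedged wiring sketch for MapReduceDocumentsChain; map_chain and reduce_chain stand in for LLMChain instances built elsewhere, and a StuffDocumentsChain is used for the reduce step as is conventional (all variable names are assumptions that must match the prompts you actually use):
.. code-block:: python

    from langchain.chains import MapReduceDocumentsChain, StuffDocumentsChain

    reduce_step = StuffDocumentsChain(
        llm_chain=reduce_chain,              # assumed LLMChain that combines the mapped outputs
        document_variable_name="summaries",  # must match reduce_chain's prompt variable
    )
    chain = MapReduceDocumentsChain(
        llm_chain=map_chain,                 # assumed LLMChain applied to each document
        combine_document_chain=reduce_step,
        document_variable_name="context",    # must match map_chain's prompt variable
        return_intermediate_steps=True,
    )
    output_text, extra = chain.combine_docs(docs, token_max=3000)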
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute collapse_document_chain: Optional[BaseCombineDocumentsChain] = Noneο
Chain to use to collapse intermediary results if needed.
If None, will use the combine_document_chain.
attribute combine_document_chain: BaseCombineDocumentsChain [Required]ο
Chain to use to combine results of applying llm_chain to documents.
attribute document_variable_name: str [Required]ο
The variable name in the llm_chain to put the documents in.
If only one variable in the llm_chain, this need not be provided.
attribute llm_chain: LLMChain [Required]ο
Chain to apply to each document individually.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute return_intermediate_steps: bool = Falseο
Return the results of the map steps in the output.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine documents in a map reduce manner.
Combine by mapping first chain over all documents, then reducing the results.
This reducing can be done recursively if needed (if there are many documents).
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, token_max=3000, callbacks=None, **kwargs)[source]ο
Combine documents in a map reduce manner.
Combine by mapping first chain over all documents, then reducing the results.
This reducing can be done recursively if needed (if there are many documents).
Parameters
docs (List[langchain.schema.Document]) β
token_max (int) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.chains.RefineDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', initial_llm_chain, refine_llm_chain, document_variable_name, initial_response_name, document_prompt=None, return_intermediate_steps=False)[source]ο
Bases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
input_key (str) β
output_key (str) β
initial_llm_chain (langchain.chains.llm.LLMChain) β
refine_llm_chain (langchain.chains.llm.LLMChain) β
document_variable_name (str) β
initial_response_name (str) β
document_prompt (langchain.prompts.base.BasePromptTemplate) β
return_intermediate_steps (bool) β
Return type
None
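A hedged sketch of the refine wiring; both LLMChains and the variable names below are placeholders that have to line up with the prompts you actually use:
.. code-block:: python

    from langchain.chains import RefineDocumentsChain

    chain = RefineDocumentsChain(
        initial_llm_chain=initial_chain,          # assumed LLMChain run on the first document
        refine_llm_chain=refine_chain,            # assumed LLMChain that refines the running answer
        document_variable_name="context",
        initial_response_name="existing_answer",  # variable the refine prompt reads the draft from
    )
    answer, extra = chain.combine_docs(docs)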
attribute callback_manager: Optional[BaseCallbackManager] = Noneο
Deprecated, use callbacks instead.
attribute callbacks: Callbacks = Noneο
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
attribute document_prompt: BasePromptTemplate [Optional]ο
Prompt to use to format each document.
attribute document_variable_name: str [Required]ο
The variable name in the initial_llm_chain to put the documents in.
If only one variable in the initial_llm_chain, this need not be provided.
attribute initial_llm_chain: LLMChain [Required]ο
LLM chain to use on initial document.
attribute initial_response_name: str [Required]ο
The variable name to format the initial response in when refining.
attribute memory: Optional[BaseMemory] = Noneο
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
attribute refine_llm_chain: LLMChain [Required]ο
LLM chain to use when refining.
attribute return_intermediate_steps: bool = Falseο
Return the results of the refine steps in the output.
attribute tags: Optional[List[str]] = Noneο
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
attribute verbose: bool [Optional]ο
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
async acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)ο
Run the logic of this chain and add to output if desired.
Parameters
inputs (Union[Dict[str, Any], Any]) β Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs (bool) β boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info (bool) β Whether to include run info in the response. Defaults
to False.
tags (Optional[List[str]]) β
Return type
Dict[str, Any]
async acombine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine by mapping first chain over all, then stuffing into final chain.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
apply(input_list, callbacks=None)ο
Call the chain on all inputs in the list.
Parameters
input_list (List[Dict[str, Any]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Return type
List[Dict[str, str]]
async arun(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
combine_docs(docs, callbacks=None, **kwargs)[source]ο
Combine by mapping first chain over all, then stuffing into final chain.
Parameters
docs (List[langchain.schema.Document]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
Tuple[str, dict]
dict(**kwargs)ο
Return dictionary representation of chain.
Parameters
kwargs (Any) β
Return type
Dict
prep_inputs(inputs)ο
Validate and prep inputs.
Parameters
inputs (Union[Dict[str, Any], Any]) β
Return type
Dict[str, str]
prep_outputs(inputs, outputs, return_only_outputs=False)ο
Validate and prep outputs.
Parameters
inputs (Dict[str, str]) β
outputs (Dict[str, str]) β
return_only_outputs (bool) β
Return type
Dict[str, str]
prompt_length(docs, **kwargs)ο
Return the prompt length given the documents passed in.
Returns None if the method does not depend on the prompt length.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
Optional[int]
run(*args, callbacks=None, tags=None, **kwargs)ο
Run the chain as text in, text out or multiple variables, text out.
Parameters
args (Any) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
str
save(file_path)ο
Save the chain.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the chain to.
Return type
None
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
to_json()ο
Return type
Union[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]
to_json_not_implemented()ο
Return type
langchain.load.serializable.SerializedNotImplemented
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
Base classesο
Common schema objects.
langchain.schema.get_buffer_string(messages, human_prefix='Human', ai_prefix='AI')[source]ο
Get buffer string of messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
human_prefix (str) β
ai_prefix (str) β
Return type
str
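For example, a small sketch using the documented default prefixes:
.. code-block:: python

    from langchain.schema import AIMessage, HumanMessage, get_buffer_string

    history = [HumanMessage(content="Hi there"), AIMessage(content="Hello!")]
    print(get_buffer_string(history))
    # Human: Hi there
    # AI: Hello!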
class langchain.schema.AgentAction(tool, tool_input, log)[source]ο
Bases: object
Agent's action to take.
Parameters
tool (str) β
tool_input (Union[str, dict]) β
log (str) β
Return type
None
class langchain.schema.AgentFinish(return_values, log)[source]ο
Bases: NamedTuple
Agent's return value.
Parameters
return_values (dict) β
log (str) β
return_values: dictο
Alias for field number 0
log: strο
Alias for field number 1
count(value, /)ο
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)ο
Return first index of value.
Raises ValueError if the value is not present.
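An illustrative sketch of both schema objects (the tool name and values are made up):
.. code-block:: python

    from langchain.schema import AgentAction, AgentFinish

    action = AgentAction(tool="search", tool_input="weather in Paris", log="Calling search...")
    finish = AgentFinish(return_values={"output": "It is sunny."}, log="Finished.")
    print(action.tool, finish.return_values["output"])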
class langchain.schema.Generation(*, text, generation_info=None)[source]ο
Bases: langchain.load.serializable.Serializable
Output of a single generation.
Parameters
text (str) β
generation_info (Optional[Dict[str, Any]]) β
Return type
None
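A tiny illustrative example (the generation_info contents are provider-specific and made up here):
.. code-block:: python

    from langchain.schema import Generation

    gen = Generation(text="Paris", generation_info={"finish_reason": "stop"})
    print(gen.text)             # "Paris"
    print(gen.generation_info)  # {"finish_reason": "stop"}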
attribute generation_info: Optional[Dict[str, Any]] = Noneο
Raw generation info response from the provider
attribute text: str [Required]ο
Generated text output.
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
class langchain.schema.BaseMessage(*, content, additional_kwargs=None)[source]ο
Bases: langchain.load.serializable.Serializable
Message object.
Parameters
content (str) β
additional_kwargs (dict) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
abstract property type: strο
Type of the message, used for serialization.
class langchain.schema.HumanMessage(*, content, additional_kwargs=None, example=False)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is spoken by the human.
Parameters
content (str) β
additional_kwargs (dict) β
example (bool) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.AIMessage(*, content, additional_kwargs=None, example=False)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is spoken by the AI.
Parameters
content (str) β
additional_kwargs (dict) β
example (bool) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.SystemMessage(*, content, additional_kwargs=None)[source]ο
Bases: langchain.schema.BaseMessage
Type of message that is a system message.
Parameters
content (str) β
additional_kwargs (dict) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.FunctionMessage(*, content, additional_kwargs=None, name)[source]ο
Bases: langchain.schema.BaseMessage
Parameters
content (str) β
additional_kwargs (dict) β
name (str) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
class langchain.schema.ChatMessage(*, content, additional_kwargs=None, role)[source]ο
Bases: langchain.schema.BaseMessage
Type of message with arbitrary speaker.
Parameters
content (str) β
additional_kwargs (dict) β
role (str) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
property type: strο
Type of the message, used for serialization.
langchain.schema.messages_to_dict(messages)[source]ο
Convert messages to dict.
Parameters
messages (List[langchain.schema.BaseMessage]) β List of messages to convert.
Returns
List of dicts.
Return type
List[dict]
langchain.schema.messages_from_dict(messages)[source]ο
Convert messages from dict.
Parameters
messages (List[dict]) β List of messages (dicts) to convert.
Returns
List of messages (BaseMessages).
Return type
List[langchain.schema.BaseMessage]
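A round-trip sketch of the two helpers above (message contents are made up):
.. code-block:: python

    from langchain.schema import AIMessage, HumanMessage, messages_from_dict, messages_to_dict

    original = [HumanMessage(content="What is 2 + 2?"), AIMessage(content="4")]
    as_dicts = messages_to_dict(original)    # plain dicts, e.g. for JSON storage
    restored = messages_from_dict(as_dicts)  # back to BaseMessage objects
    assert [m.content for m in restored] == [m.content for m in original]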
class langchain.schema.ChatGeneration(*, text='', generation_info=None, message)[source]ο
Bases: langchain.schema.Generation
Output of a single generation.
Parameters
text (str) β
generation_info (Optional[Dict[str, Any]]) β
message (langchain.schema.BaseMessage) β
Return type
None
attribute generation_info: Optional[Dict[str, Any]] = Noneο
Raw generation info response from the provider
attribute text: str = ''ο
Generated text output.
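An illustrative construction; the text attribute is expected to mirror the wrapped message's content, assuming the default behaviour:
.. code-block:: python

    from langchain.schema import AIMessage, ChatGeneration

    gen = ChatGeneration(message=AIMessage(content="Bonjour!"))
    print(gen.message.content)  # "Bonjour!"
    print(gen.text)             # expected to match the message content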
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object. | https://api.python.langchain.com/en/latest/modules/base_classes.html |
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
This class is LangChain serializable.
class langchain.schema.RunInfo(*, run_id)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant metadata for a Run.
Parameters
run_id (uuid.UUID) β
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns | https://api.python.langchain.com/en/latest/modules/base_classes.html |
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Return type
None
class langchain.schema.ChatResult(*, generations, llm_output=None)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant information for a Chat Result.
Parameters
generations (List[langchain.schema.ChatGeneration]) β
llm_output (Optional[dict]) β
Return type
None
attribute generations: List[langchain.schema.ChatGeneration] [Required]ο
List of the things generated.
attribute llm_output: Optional[dict] = Noneο
For arbitrary LLM provider specific output.
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model | https://api.python.langchain.com/en/latest/modules/base_classes.html |
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Return type
None
class langchain.schema.LLMResult(*, generations, llm_output=None, run=None)[source]ο
Bases: pydantic.main.BaseModel
Class that contains all relevant information for an LLM Result.
Parameters
generations (List[List[langchain.schema.Generation]]) β
llm_output (Optional[dict]) β
run (Optional[List[langchain.schema.RunInfo]]) β
Return type
None
attribute generations: List[List[langchain.schema.Generation]] [Required]ο
List of the things generated. This is List[List[]] because
each input could have multiple generations.
attribute llm_output: Optional[dict] = Noneο
For arbitrary LLM provider specific output.
attribute run: Optional[List[langchain.schema.RunInfo]] = Noneο
Run metadata.
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include | https://api.python.langchain.com/en/latest/modules/base_classes.html |
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
flatten()[source]ο
Flatten generations into a single list.
Return type
List[langchain.schema.LLMResult]
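As a rough sketch of how flatten() can be used (the texts are invented), a batched result covering two prompts becomes two single-prompt results:
from langchain.schema import Generation, LLMResult

batched = LLMResult(generations=[[Generation(text="red")], [Generation(text="blue")]])
# flatten() yields one LLMResult per prompt, each holding that prompt's generations.
per_prompt = batched.flatten()  # list of two LLMResult objects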
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
class langchain.schema.PromptValue[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model | https://api.python.langchain.com/en/latest/modules/base_classes.html |
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract to_messages()[source]ο
Return prompt as messages.
Return type
List[langchain.schema.BaseMessage]
abstract to_string()[source]ο
Return prompt as string. | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Return type
str
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
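A minimal sketch of a concrete PromptValue with a single text field (the class name is hypothetical):
from typing import List
from langchain.schema import BaseMessage, HumanMessage, PromptValue

class SimplePromptValue(PromptValue):
    text: str

    def to_string(self) -> str:
        # Plain-text form, used by completion-style LLMs.
        return self.text

    def to_messages(self) -> List[BaseMessage]:
        # Message form, used by chat models.
        return [HumanMessage(content=self.text)]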
class langchain.schema.BaseMemory[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC
Base interface for memory in chains.
Return type
None
abstract clear()[source]ο
Clear memory contents.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | https://api.python.langchain.com/en/latest/modules/base_classes.html |
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
abstract save_context(inputs, outputs)[source]ο
Save the context of this model run to memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/modules/base_classes.html |
abstract property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
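A minimal sketch of a custom memory that always injects the same context string (the class name is hypothetical):
from typing import Any, Dict, List
from langchain.schema import BaseMemory

class StaticMemory(BaseMemory):
    context: str = ""

    @property
    def memory_variables(self) -> List[str]:
        return ["context"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Always inject the same context, regardless of the inputs.
        return {"context": self.context}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # This toy memory does not learn from new turns.
        pass

    def clear(self) -> None:
        self.context = ""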
class langchain.schema.BaseChatMessageHistory[source]ο
Bases: abc.ABC
Base interface for chat message history
See ChatMessageHistory for default implementation.
add_user_message(message)[source]ο
Add a user message to the store
Parameters
message (str) β
Return type
None
add_ai_message(message)[source]ο
Add an AI message to the store
Parameters
message (str) β
Return type
None
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
abstract clear()[source]ο
Remove all messages from the store
Return type
None
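A minimal list-backed sketch (the class name is hypothetical; the built-in ChatMessageHistory covers the same ground):
from typing import List
from langchain.schema import BaseChatMessageHistory, BaseMessage

class InMemoryHistory(BaseChatMessageHistory):
    def __init__(self) -> None:
        self.messages: List[BaseMessage] = []

    def add_message(self, message: BaseMessage) -> None:
        # add_user_message and add_ai_message both route through this method.
        self.messages.append(message)

    def clear(self) -> None:
        self.messages = []

history = InMemoryHistory()
history.add_user_message("Hello!")
history.add_ai_message("Hi, how can I help?")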
class langchain.schema.Document(*, page_content, metadata=None)[source]ο
Bases: langchain.load.serializable.Serializable
Interface for interacting with a document.
Parameters
page_content (str) β
metadata (dict) β
Return type
None
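For example, a document might be constructed as follows (the content and metadata are invented):
from langchain.schema import Document

doc = Document(
    page_content="LangChain provides a standard interface for chains.",
    metadata={"source": "intro.txt", "page": 1},
)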
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | https://api.python.langchain.com/en/latest/modules/base_classes.html |
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β | https://api.python.langchain.com/en/latest/modules/base_classes.html |
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.BaseRetriever[source]ο
Bases: abc.ABC
Base interface for retrievers.
abstract get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
abstract async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters | https://api.python.langchain.com/en/latest/modules/base_classes.html |
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
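A minimal sketch of a custom retriever over an in-memory list of documents (the class name is hypothetical):
from typing import List
from langchain.schema import BaseRetriever, Document

class KeywordRetriever(BaseRetriever):
    def __init__(self, docs: List[Document]) -> None:
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        # Naive keyword match; real retrievers usually use embeddings or an index.
        return [d for d in self.docs if query.lower() in d.page_content.lower()]

    async def aget_relevant_documents(self, query: str) -> List[Document]:
        return self.get_relevant_documents(query)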
langchain.schema.Memoryο
alias of langchain.schema.BaseMemory
class langchain.schema.BaseLLMOutputParser[source]ο
Bases: langchain.load.serializable.Serializable, abc.ABC, Generic[langchain.schema.T]
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)ο | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
Return type
DictStrAny
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.schema.BaseOutputParser[source]ο
Bases: langchain.schema.BaseLLMOutputParser, abc.ABC, Generic[langchain.schema.T]
Class to parse the output of an LLM call.
Output parsers help structure language model responses.
Return type
None
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include | https://api.python.langchain.com/en/latest/modules/base_classes.html |
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)[source]ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
abstract parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model )
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.schema.T | https://api.python.langchain.com/en/latest/modules/base_classes.html |
parse_result(result)[source]ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
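A minimal sketch of a custom parser that splits the completion on commas (the class name is hypothetical):
from typing import List
from langchain.schema import BaseOutputParser

class CommaListParser(BaseOutputParser[List[str]]):
    def parse(self, text: str) -> List[str]:
        return [item.strip() for item in text.split(",") if item.strip()]

    def get_format_instructions(self) -> str:
        return "Return your answer as a comma-separated list."

# CommaListParser().parse("red, green, blue") -> ["red", "green", "blue"]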
class langchain.schema.NoOpOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser[str]
Output parser that just returns the text as is.
Return type
None
classmethod construct(_fields_set=None, **values)ο | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return dictionary representation of output parser.
Parameters
kwargs (Any) β
Return type
Dict
get_format_instructions()ο
Instructions on how the LLM output should be formatted.
Return type
str
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model )
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
str
parse_result(result)ο
Parse LLM Result.
Parameters
result (List[langchain.schema.Generation]) β
Return type
langchain.schema.T
parse_with_prompt(completion, prompt)ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
Any
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor. | https://api.python.langchain.com/en/latest/modules/base_classes.html |
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
exception langchain.schema.OutputParserException(error, observation=None, llm_output=None, send_to_llm=False)[source]ο
Bases: ValueError
Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. OutputParserExceptions will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
Parameters
error (Any) β
observation (str | None) β
llm_output (str | None) β
send_to_llm (bool) β
add_note()ο
Exception.add_note(note) β
add a note to the exception
with_traceback()ο
Exception.with_traceback(tb) β
set self.__traceback__ to tb and return self.
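A rough sketch of raising and catching the exception inside a custom parser (the class name is hypothetical):
from langchain.schema import BaseOutputParser, OutputParserException

class IntParser(BaseOutputParser[int]):
    def parse(self, text: str) -> int:
        try:
            return int(text.strip())
        except ValueError:
            # Signal a parsing failure rather than a generic error.
            raise OutputParserException(f"Expected an integer, got: {text!r}")

try:
    IntParser().parse("forty-two")
except OutputParserException as err:
    print(f"Could not parse model output: {err}")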
class langchain.schema.BaseDocumentTransformer[source]ο
Bases: abc.ABC
Base interface for transforming documents.
abstract transform_documents(documents, **kwargs)[source]ο
Transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
abstract async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a list of documents. | https://api.python.langchain.com/en/latest/modules/base_classes.html |
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/base_classes.html |
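A minimal sketch of a concrete transformer that lowercases every document (the class name is hypothetical):
from typing import Any, Sequence
from langchain.schema import BaseDocumentTransformer, Document

class LowercaseTransformer(BaseDocumentTransformer):
    def transform_documents(self, documents: Sequence[Document], **kwargs: Any) -> Sequence[Document]:
        # Return new Document objects with lowercased text; metadata is preserved.
        return [Document(page_content=d.page_content.lower(), metadata=d.metadata) for d in documents]

    async def atransform_documents(self, documents: Sequence[Document], **kwargs: Any) -> Sequence[Document]:
        return self.transform_documents(documents, **kwargs)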
Chat Modelsο
class langchain.chat_models.ChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
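A slightly fuller sketch of calling the chat model with messages (assuming OPENAI_API_KEY is set; the prompt is invented):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
# Calling the model on a list of messages returns an AIMessage.
reply = chat([
    SystemMessage(content="You are a terse assistant."),
    HumanMessage(content="What is 2 + 2?"),
])
print(reply.content)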
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β | https://api.python.langchain.com/en/latest/modules/chat_models.html |
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: Optional[int] = Noneο
Maximum number of tokens to generate.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt.
attribute openai_api_base: Optional[str] = Noneο
attribute openai_api_key: Optional[str] = Noneο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_organization: Optional[str] = Noneο
attribute openai_proxy: Optional[str] = Noneο
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases | https://api.python.langchain.com/en/latest/modules/chat_models.html |
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
completion_with_retry(**kwargs)[source]ο
Use tenacity to retry the completion call.
Parameters
kwargs (Any) β
Return type
Any
get_num_tokens_from_messages(messages)[source]ο
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)[source]ο
Get the tokens present in the text with tiktoken package.
Parameters
text (str) β
Return type
List[int]
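A rough sketch of counting tokens before sending a prompt (assuming OPENAI_API_KEY is set and the tiktoken package is installed):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo")
# Token count for a message list, matching how the API bills chat requests.
num_tokens = chat.get_num_tokens_from_messages(
    [HumanMessage(content="Summarize the plot of Hamlet in one sentence.")]
)
# Raw token ids for a plain string.
token_ids = chat.get_token_ids("Hello world")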
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/modules/chat_models.html |
class langchain.chat_models.AzureChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key='', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the βModel deployment nameβ in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Parameters
cache (Optional[bool]) β
verbose (bool) β | https://api.python.langchain.com/en/latest/modules/chat_models.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (str) β
openai_api_base (str) β
openai_organization (str) β
openai_proxy (str) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
deployment_name (str) β
openai_api_type (str) β
openai_api_version (str) β
Return type
None
attribute deployment_name: str = ''ο
attribute openai_api_base: str = ''ο
attribute openai_api_key: str = ''ο
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
attribute openai_api_type: str = 'azure'ο
attribute openai_api_version: str = ''ο
attribute openai_organization: str = ''ο
attribute openai_proxy: str = ''ο
class langchain.chat_models.FakeListChatModel(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]ο
Bases: langchain.chat_models.base.SimpleChatModel
Fake ChatModel for testing purposes.
Parameters | https://api.python.langchain.com/en/latest/modules/chat_models.html |
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
responses (List) β
i (int) β
Return type
None
attribute i: int = 0ο
attribute responses: List [Required]ο
class langchain.chat_models.PromptLayerChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]ο
Bases: langchain.chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional
Parameters
pl_tags (Optional[List[str]]) β List of strings to tag the request with.
return_pl_id (Optional[bool]) β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
cache (Optional[bool]) β | https://api.python.langchain.com/en/latest/modules/chat_models.html |
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
max_retries (int) β
streaming (bool) β
n (int) β
max_tokens (Optional[int]) β
tiktoken_model_name (Optional[str]) β
Return type
None
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
attribute pl_tags: Optional[List[str]] = Noneο
attribute return_pl_id: Optional[bool] = Falseο
class langchain.chat_models.ChatAnthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, langchain.llms.anthropic._AnthropicCommon | https://api.python.langchain.com/en/latest/modules/chat_models.html |
Wrapper around Anthropicβs large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
get_num_tokens(text)[source]ο
Calculate number of tokens.
Parameters
text (str) β
Return type
int
property lc_serializable: boolο
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/modules/chat_models.html |
class langchain.chat_models.ChatGooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='models/chat-bison-001', google_api_key=None, temperature=None, top_p=None, top_k=None, n=1)[source]ο
Bases: langchain.chat_models.base.BaseChatModel, pydantic.main.BaseModel
Wrapper around Googleβs PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
google_api_key (Optional[str]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
top_k (Optional[int]) β
n (int) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute model_name: str = 'models/chat-bison-001'ο
Model name to use.
attribute n: int = 1ο
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated. | https://api.python.langchain.com/en/latest/modules/chat_models.html |
attribute temperature: Optional[float] = Noneο
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
attribute top_k: Optional[int] = Noneο
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
attribute top_p: Optional[float] = Noneο
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
class langchain.chat_models.ChatVertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='chat-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5)[source]ο
Bases: langchain.llms.vertexai._VertexAICommon, langchain.chat_models.base.BaseChatModel
Wrapper around Vertex AI large language models.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (_LanguageModel) β
model_name (str) β
temperature (float) β
max_output_tokens (int) β
top_p (float) β
top_k (int) β
stop (Optional[List[str]]) β
project (Optional[str]) β
location (str) β
credentials (Any) β | https://api.python.langchain.com/en/latest/modules/chat_models.html |
request_parallelism (int) β
Return type
None
attribute model_name: str = 'chat-bison'ο
Model name to use. | https://api.python.langchain.com/en/latest/modules/chat_models.html |
LLMsο
Wrappers on top of large language models APIs.
class langchain.llms.AI21(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='j2-jumbo-instruct', temperature=0.7, maxTokens=256, minTokens=0, topP=1.0, presencePenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults=1, logitBias=None, ai21_api_key=None, stop=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
temperature (float) β
maxTokens (int) β
minTokens (int) β | https://api.python.langchain.com/en/latest/modules/llms.html |
topP (float) β
presencePenalty (langchain.llms.ai21.AI21PenaltyData) β
countPenalty (langchain.llms.ai21.AI21PenaltyData) β
frequencyPenalty (langchain.llms.ai21.AI21PenaltyData) β
numResults (int) β
logitBias (Optional[Dict[str, float]]) β
ai21_api_key (Optional[str]) β
stop (Optional[List[str]]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base url to use, if None decides based on model name.
attribute countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to count.
attribute frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens according to frequency.
attribute logitBias: Optional[Dict[str, float]] = Noneο
Adjust the probability of specific tokens being generated.
attribute maxTokens: int = 256ο
The maximum number of tokens to generate in the completion.
attribute minTokens: int = 0ο
The minimum number of tokens to generate in the completion.
attribute model: str = 'j2-jumbo-instruct'ο
Model name to use. | https://api.python.langchain.com/en/latest/modules/llms.html |
attribute numResults: int = 1ο
How many completions to generate for each prompt.
attribute presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)ο
Penalizes repeated tokens.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute topP: float = 1.0ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
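Example of configuring the generation and penalty parameters (a minimal sketch; assumes AI21_API_KEY is set in the environment):
from langchain.llms import AI21
from langchain.llms.ai21 import AI21PenaltyData
# Penalize repeated tokens more strongly than the default scale of 0.
llm = AI21(
    model="j2-jumbo-instruct",
    temperature=0.7,
    maxTokens=256,
    presencePenalty=AI21PenaltyData(scale=1),
)
print(llm("Suggest three names for a coffee shop."))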
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
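The async variants mirror the synchronous API. A minimal sketch, assuming the wrapped provider implements async generation (not every wrapper does):
import asyncio
from langchain.llms import AI21

llm = AI21()  # assumes AI21_API_KEY is set

async def main():
    # Generate completions for several prompts concurrently.
    result = await llm.agenerate(["Tell me a joke.", "Tell me a haiku."])
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())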
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
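generate() accepts a batch of prompts and returns an LLMResult rather than a plain string. A minimal sketch (assumes AI21_API_KEY is set):
from langchain.llms import AI21

llm = AI21()
result = llm.generate(["Write a tagline for a bakery.", "Write a tagline for a gym."])
# result.generations holds one list of Generation objects per input prompt.
for prompt_generations in result.generations:
    print(prompt_generations[0].text)
# Provider-specific metadata; may be None depending on the wrapper.
print(result.llm_output)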
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
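A minimal token-counting sketch. The default implementation tokenizes locally (it may require the transformers package), so the count is an approximation of what the provider bills:
from langchain.llms import AI21

llm = AI21()  # assumes AI21_API_KEY is set
text = "How many tokens is this sentence?"
print(llm.get_num_tokens(text))   # integer count
print(llm.get_token_ids(text))    # the underlying token ids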
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
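predict() and predict_messages() are thin conveniences over the call/generate interface. A minimal sketch (assumes AI21_API_KEY is set):
from langchain.llms import AI21
from langchain.schema import HumanMessage

llm = AI21()
# Text in, text out, with an optional stop sequence.
print(llm.predict("Translate 'good morning' to French.", stop=["\n"]))
# Chat-style input is flattened to a prompt; the reply comes back as a message.
reply = llm.predict_messages([HumanMessage(content="Say hello.")])
print(reply.content)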
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
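A saved configuration can be reloaded later; a minimal sketch, assuming langchain.llms.loading.load_llm is available in your version (secrets such as the API key are not serialized and must still be present in the environment):
from langchain.llms.loading import load_llm
llm = load_llm("path/llm.yaml")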
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AlephAlpha(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='luminous-base', maximum_tokens=64, temperature=0.0, top_k=0, top_p=0.0, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalties_include_prompt=False, use_multiplicative_presence_penalty=False, penalty_bias=None, penalty_exceptions=None, penalty_exceptions_include_stop_sequences=None, best_of=None, n=1, logit_bias=None, log_probs=None, tokens=False, disable_optimizations=False, minimum_tokens=0, echo=False, use_multiplicative_frequency_penalty=False, sequence_penalty=0.0, sequence_penalty_min_length=2, use_multiplicative_sequence_penalty=False, completion_bias_inclusion=None, completion_bias_inclusion_first_token_only=False, completion_bias_exclusion=None, completion_bias_exclusion_first_token_only=False, contextual_control_threshold=None, control_log_additive=True, repetition_penalties_include_completion=True, raw_completion=False, aleph_alpha_api_key=None, stop_sequences=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (Optional[str]) β
maximum_tokens (int) β
temperature (float) β
top_k (int) β
top_p (float) β
presence_penalty (float) β
frequency_penalty (float) β
repetition_penalties_include_prompt (Optional[bool]) β
use_multiplicative_presence_penalty (Optional[bool]) β
penalty_bias (Optional[str]) β
penalty_exceptions (Optional[List[str]]) β
penalty_exceptions_include_stop_sequences (Optional[bool]) β
best_of (Optional[int]) β
n (int) β
logit_bias (Optional[Dict[int, float]]) β
log_probs (Optional[int]) β
tokens (Optional[bool]) β
disable_optimizations (Optional[bool]) β
minimum_tokens (Optional[int]) β
echo (bool) β
use_multiplicative_frequency_penalty (bool) β
sequence_penalty (float) β
sequence_penalty_min_length (int) β
use_multiplicative_sequence_penalty (bool) β
completion_bias_inclusion (Optional[Sequence[str]]) β
completion_bias_inclusion_first_token_only (bool) β
completion_bias_exclusion (Optional[Sequence[str]]) β
completion_bias_exclusion_first_token_only (bool) β
contextual_control_threshold (Optional[float]) β
control_log_additive (Optional[bool]) β
repetition_penalties_include_completion (bool) β
raw_completion (bool) β
aleph_alpha_api_key (Optional[str]) β
stop_sequences (Optional[List[str]]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute best_of: Optional[int] = Noneο
Generate best_of completions and return the one with the
highest log probability per token.
attribute completion_bias_exclusion_first_token_only: bool = Falseο
Only consider the first token for the completion_bias_exclusion.
attribute contextual_control_threshold: Optional[float] = Noneο
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
attribute control_log_additive: Optional[bool] = Trueο
True: apply control by adding the log(control_factor) to attention scores.
False: apply control by (attention_scores - attention_scores.min(-1)) * control_factor
attribute echo: bool = Falseο
Echo the prompt in the completion.
attribute frequency_penalty: float = 0.0ο
Penalizes repeated tokens according to frequency.
attribute log_probs: Optional[int] = Noneο
Number of top log probabilities to be returned for each generated token.
attribute logit_bias: Optional[Dict[int, float]] = Noneο
The logit bias allows influencing the likelihood of generating specific tokens.
attribute maximum_tokens: int = 64ο
The maximum number of tokens to be generated.
attribute minimum_tokens: Optional[int] = 0ο
Generate at least this number of tokens.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute penalty_bias: Optional[str] = Noneο
Penalty bias for the completion.
attribute penalty_exceptions: Optional[List[str]] = Noneο
List of strings that may be generated without penalty,
regardless of other penalty settings
attribute penalty_exceptions_include_stop_sequences: Optional[bool] = Noneο
Should stop_sequences be included in penalty_exceptions.
attribute presence_penalty: float = 0.0ο
Penalizes repeated tokens.
attribute raw_completion: bool = Falseο
Force the raw completion of the model to be returned.
attribute repetition_penalties_include_completion: bool = Trueο
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
attribute repetition_penalties_include_prompt: Optional[bool] = Falseο
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
attribute stop_sequences: Optional[List[str]] = Noneο
Stop sequences to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.0ο
A non-negative float that tunes the degree of randomness in generation.
attribute tokens: Optional[bool] = Falseο
return tokens of completion.
attribute top_k: int = 0ο
Number of most likely tokens to consider at each step.
attribute top_p: float = 0.0ο
Total probability mass of tokens to consider at each step.
attribute use_multiplicative_presence_penalty: Optional[bool] = Falseο
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
attribute verbose: bool [Optional]ο
Whether to print out response text.
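Example of configuring sampling and stop sequences (a minimal sketch; assumes the aleph_alpha_client package is installed and ALEPH_ALPHA_API_KEY is set):
from langchain.llms import AlephAlpha

llm = AlephAlpha(
    model="luminous-base",
    maximum_tokens=64,
    temperature=0.0,
    stop_sequences=["\n\n"],
)
print(llm("Q: What is the capital of France?\nA:"))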
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.AmazonAPIGateway(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url, model_kwargs=None, content_handler=<langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around a custom Amazon API Gateway endpoint.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
api_url (str) β
model_kwargs (Optional[Dict]) β
content_handler (langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway) β
Return type
None
attribute api_url: str [Required]ο
API Gateway URL
attribute content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>ο
The content handler class that provides input and output
transform functions to handle formats between the LLM
and the endpoint.
attribute model_kwargs: Optional[Dict] = Noneο
Keyword arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
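A minimal sketch of wiring the wrapper to a gateway. The URL below is hypothetical, and the payload shape is whatever your deployed endpoint (via the content handler) expects:
from langchain.llms import AmazonAPIGateway

llm = AmazonAPIGateway(
    api_url="https://<api-id>.execute-api.<region>.amazonaws.com/prod/llm",
    model_kwargs={"max_new_tokens": 256, "temperature": 0.7},
)
print(llm("Explain API Gateway in one sentence."))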
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]ο
Bases: langchain.llms.base.LLM, langchain.llms.anthropic._AnthropicCommon
Wrapper around Anthropicβs large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Parameters
client (Any) β
model (str) β
max_tokens_to_sample (int) β
temperature (Optional[float]) β
top_k (Optional[int]) β
top_p (Optional[float]) β
streaming (bool) β
default_request_timeout (Optional[Union[float, Tuple[float, float]]]) β
anthropic_api_url (Optional[str]) β
anthropic_api_key (Optional[str]) β
HUMAN_PROMPT (Optional[str]) β
AI_PROMPT (Optional[str]) β
count_tokens (Optional[Callable[[str], int]]) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
Return type
None
attribute default_request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
attribute max_tokens_to_sample: int = 256ο
Denotes the number of tokens to predict per generation.
attribute model: str = 'claude-v1'ο
Model name to use.
attribute streaming: bool = Falseο
Whether to stream the results.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = Noneο
A non-negative float that tunes the degree of randomness in generation.
attribute top_k: Optional[int] = Noneο
Number of most likely tokens to consider at each step.
attribute top_p: Optional[float] = Noneο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
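Example of setting the main sampling knobs and a stop sequence (a minimal sketch; assumes the anthropic package is installed and ANTHROPIC_API_KEY is set):
from langchain.llms import Anthropic

llm = Anthropic(model="claude-v1", max_tokens_to_sample=256, temperature=0.7)
prompt = "\n\nHuman: Name three uses for a paperclip.\n\nAssistant:"
# Stop generating if the model starts a new human turn.
print(llm(prompt, stop=["\n\nHuman:"]))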
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]ο
Calculate number of tokens.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)[source]ο
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompt to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Return type
Generator
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Anyscale(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_kwargs=None, anyscale_service_url=None, anyscale_service_route=None, anyscale_service_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
service, or pass them as named parameters to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
anyscale_service_route="SERVICE_ROUTE",
anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list=[]
@ray.remote
def send_query(llm, prompt):
resp = llm(prompt)
return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_kwargs (Optional[dict]) β
anyscale_service_url (Optional[str]) β
anyscale_service_route (Optional[str]) β
anyscale_service_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model. Reserved for future use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
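A minimal sketch without Ray, assuming the three ANYSCALE_SERVICE_* environment variables are set:
from langchain.llms import Anyscale

llm = Anyscale()
print(llm("Summarize what Ray does in one sentence."))
# Or batch several prompts through generate().
result = llm.generate(["What is Anyscale?", "What is Ray?"])
print(len(result.generations))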
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. | https://api.python.langchain.com/en/latest/modules/llms.html |