Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
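A minimal usage sketch (assuming llm is any instantiated LLM, e.g. OpenAI(); the prompts and stop sequence below are illustrative, not defaults):
.. code-block:: python
result = llm.generate(["Tell me a joke.", "Tell me a poem."], stop=["\n\n"])
# LLMResult.generations holds one list of Generation objects per input prompt
print(result.generations[0][0].text)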
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
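A saved configuration can usually be reloaded; the load_llm helper below is assumed to live in langchain.llms.loading and is shown only as a sketch:
.. code-block:: python
from langchain.llms.loading import load_llm  # assumed loader for saved LLM configs
llm = load_llm("path/llm.yaml")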
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
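Azure endpoints are usually addressed by a deployment as well; the deployment name below is a placeholder, not a default:
openai = AzureOpenAI(deployment_name="my-deployment", model_name="text-davinci-003")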
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_azure_settings » all fields
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the “best”.
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]#
Get the token IDs using the tiktoken package.
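A small sketch of the token helpers (the text is arbitrary):
.. code-block:: python
token_ids = llm.get_token_ids("Tell me a joke.")   # tiktoken token IDs
num_tokens = llm.get_num_tokens("Tell me a joke.") # count of those tokens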
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Banana
banana = Banana(model_key="")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_key: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Baseten[source]#
Use your Baseten models in Langchain
To use, you should have the baseten python package installed,
and run baseten.login() with your Baseten API key.
The required model param can be either a model id or model
version id. Using a model version ID will result in
slightly faster invocation.
Any other model parameters can also
be passed in with the format input={model_param: value, …}
The Baseten model must accept a dictionary of input with the key
“prompt” and return a dictionary with a key “data” which maps
to a list of response strings.
Example
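(A hedged sketch of typical usage based on the description above; the API key and model version id are placeholders.)
import baseten
from langchain.llms import Baseten
baseten.login("YOUR_API_KEY")
baseten_llm = Baseten(model="MODEL_VERSION_ID")
print(baseten_llm("Tell me a joke."))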
Validators
raise_deprecation » all fields
set_verbose » verbose
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Beam[source]#
Wrapper around Beam API for gpt2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
Example
llm = Beam(model_name="gpt2",
name="langchain-gpt2",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="python3.8",
python_packages=[
"diffusers[torch]>=0.10",
"transformers",
"torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field url: str = ''#
model endpoint to use
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
app_creation() → None[source]#
Creates a Python file which will contain your Beam app definition.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
run_creation() → None[source]#
Creates a Python file which will be deployed on beam.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Bedrock[source]#
LLM provider to invoke Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
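Example
(A hedged sketch; the model id, region, and profile are placeholders taken from the field descriptions below, not defaults.)
from langchain.llms import Bedrock
llm = Bedrock(model_id="amazon.titan-tg1-large", region_name="us-west-2", credentials_profile_name="default")
print(llm("Tell me a joke."))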
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field model_id: str [Required]#
Id of the model to call, e.g., amazon.titan-tg1-large, this is
equivalent to the modelId property in the list-foundation-models api
field model_kwargs: Optional[Dict] = None#
Keyword arguments to pass to the model.
field region_name: Optional[str] = None#
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.CTransformers[source]#
Wrapper around the C Transformers LLM interface.
To use, you should have the ctransformers python package installed.
See marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
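Generation settings can be passed through config; the keys below come from the ctransformers package and are illustrative assumptions, not defaults of this class:
llm = CTransformers(model="marella/gpt-2-ggml", config={"max_new_tokens": 256, "temperature": 0.8})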
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field config: Optional[Dict[str, Any]] = None#
The config parameters.
See marella/ctransformers
field lib: Optional[str] = None#
The path to a shared library or one of avx2, avx, basic.
field model: str [Required]#
The path to a model file or directory or the name of a Hugging Face Hub
model repo.
field model_file: Optional[str] = None#
The name of the model file in repo or directory.
field model_type: Optional[str] = None#
The model type.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.CerebriumAI[source]#
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import CerebriumAI
cerebrium = CerebriumAI(endpoint_url="")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency. Between 0 and 1.
field k: int = 0#
Number of most likely tokens to consider at each step.
field max_retries: int = 10#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = None#
Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Databricks[source]#
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have “Can Query” permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster driver proxy app (recommended for interactive development).
One can load an LLM on a Databricks interactive cluster and start a local HTTP
server on the driver node to serve the model at / using HTTP POST method
with JSON input/output.
Please use a port number between [3000, 8000] and let the server listen on
the driver IP address or simply 0.0.0.0 instead of localhost only.
To wrap it as an LLM you must have “Can Attach To” permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}
outputs: {"type": "string"}
If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
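For illustration, a minimal sketch of the two configurations described above; the endpoint name, cluster ID, and port are placeholders, not values taken from this reference:
from langchain.llms import Databricks
# Serving endpoint: set endpoint_name only; host and api_token fall back to
# DATABRICKS_HOST / DATABRICKS_TOKEN or the current notebook context.
llm = Databricks(endpoint_name="my-llm-endpoint")
# Cluster driver proxy app: set cluster_id and cluster_driver_port only.
llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")
# If the endpoint expects a different signature, wrap the request before it is
# sent (prompt, stop and extra kwargs arrive as keyword arguments).
def transform_input(**request):
    request["prompt"] = "Answer briefly: " + request["prompt"]
    return request
llm = Databricks(endpoint_name="my-llm-endpoint", transform_input_fn=transform_input)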
Validators
raise_deprecation » all fields
set_cluster_driver_port » cluster_driver_port
set_cluster_id » cluster_id
set_model_kwargs » model_kwargs
set_verbose » verbose
field api_token: str [Optional]#
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in “single user” or
“no isolation shared” mode.
field cluster_driver_port: Optional[str] = None#
The port number used by the HTTP server running on the cluster driver node.
The server should listen on the driver IP address or simply 0.0.0.0 so that clients can connect.
We recommend using a port number between [3000, 8000] for the server.
field cluster_id: Optional[str] = None#
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode, the current cluster ID is used as default.
You must not set both endpoint_name and cluster_id.
field endpoint_name: Optional[str] = None#
Name of the model serving endpoint.
You must specify the endpoint name to connect to a model serving endpoint.
You must not set both endpoint_name and cluster_id.
field host: str [Optional]#
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode.
field model_kwargs: Optional[Dict[str, Any]] = None#
Extra parameters to pass to the endpoint.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
field transform_output_fn: Optional[Callable[[...], str]] = None#
A function that transforms the output from the endpoint to the generated text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.DeepInfra[source]#
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.FakeListLLM[source]#
Fake LLM wrapper for testing purposes.
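A minimal usage sketch for deterministic unit tests; the responses argument is an assumption (a fixed list of strings the fake LLM plays back, one per call):
from langchain.llms import FakeListLLM
fake_llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
fake_llm("any prompt")      # returns "first canned answer"
fake_llm("another prompt")  # returns "second canned answer"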
Validators
raise_deprecation » all fields
set_verbose » verbose
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field endpoint_url: str = ''#
Endpoint URL to use.
field length: int = 256#
The maximum number of tokens to generate in the completion.
field repetition_penalty: int = 1#
Penalizes repeated tokens according to frequency.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 40#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.GPT4All[source]#
Wrapper around GPT4All language models.
To use, you should have the gpt4all python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
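A further sketch, assuming the sampling fields documented below map directly to constructor keyword arguments (the model path is a placeholder):
llm = GPT4All(
    model="./models/gpt4all-model.bin",  # local pre-trained model file
    n_ctx=512,        # token context window
    n_predict=256,    # maximum number of tokens to generate
    temp=0.7,
    top_k=40,
    top_p=0.95,
    repeat_penalty=1.3,
)
llm("Name three colors.")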
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allow_download: bool = False#
If model does not exist in ~/.cache/gpt4all/, download it.
field context_erase: float = 0.5#
Leave (n_ctx * context_erase) tokens
starting from beginning if the context has run out.
field echo: Optional[bool] = False#
Whether to echo the prompt.
field embedding: bool = False#
Use embedding mode only.
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Optional[int] = 256#
The maximum number of tokens to generate.
field n_threads: Optional[int] = 4#
Number of threads to use.
field repeat_last_n: Optional[int] = 64#
Last n tokens to penalize
field repeat_penalty: Optional[float] = 1.3#
The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.GooglePalm[source]#
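A minimal usage sketch, assuming the google.generativeai package is installed and credentials come from the GOOGLE_API_KEY environment variable (or a google_api_key constructor argument); only fields documented below are used:
from langchain.llms import GooglePalm
palm = GooglePalm(model_name="models/text-bison-001", temperature=0.7, max_output_tokens=128)
palm("Write a one-line greeting.")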
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field max_output_tokens: Optional[int] = None#
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
field model_name: str = 'models/text-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.7#
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field min_tokens: int = 1#
The minimum number of tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-neo-20b'#
Model name to use
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.7#
What sampling temperature to use
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.HuggingFaceEndpoint[source]#
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
Endpoint URL to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field task: Optional[str] = None#
Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.HuggingFaceHub[source]#
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
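A hedged follow-on sketch showing a call with task and model_kwargs set (the parameter values are illustrative assumptions, not defaults):
.. code-block:: python

    from langchain.llms import HuggingFaceHub

    hf = HuggingFaceHub(
        repo_id="gpt2",
        task="text-generation",
        model_kwargs={"temperature": 0.7, "max_new_tokens": 64},  # illustrative values
        huggingfacehub_api_token="my-api-key",
    )
    print(hf("What is deep learning?"))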
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'gpt2'#
Model name to use.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field task: Optional[str] = None#
Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.HuggingFacePipeline[source]#
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
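Either instance can then be called directly or composed into a chain; a minimal hedged sketch follows (the prompt template is an assumption):
.. code-block:: python

    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a one-sentence summary about {topic}.",
    )
    chain = LLMChain(llm=hf, prompt=prompt)
    print(chain.run("transformers"))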
Validators
raise_deprecation » all fields
set_verbose » verbose
field model_id: str = 'gpt2'#
Model name to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments passed to the model.
field pipeline_kwargs: Optional[dict] = None#
Keyword arguments passed to the pipeline.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
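A minimal hedged sketch mirroring the class-level example above (the device argument shown is part of the signature; the values are illustrative):
.. code-block:: python

    llm = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        device=-1,  # -1 runs on CPU; a non-negative index selects a GPU
        pipeline_kwargs={"max_new_tokens": 10},
    )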
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.HuggingFaceTextGenInference[source]#
HuggingFace text generation inference API.
This class is a wrapper around the HuggingFace text generation inference API.
It is used to generate text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when generating text.
- repetition_penalty: The repetition penalty to use when generating text.
- stop_sequences: A list of stop sequences to use when generating text.
- seed: The seed to use when generating text.
- inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- client: The client object used to communicate with the inference server.
Methods:
- _call: Generates text based on a given prompt and stop sequences.
- _llm_type: Returns the type of LLM.
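A hedged construction sketch using the attributes listed above (the server URL and sampling values are assumptions):
.. code-block:: python

    from langchain.llms import HuggingFaceTextGenInference

    llm = HuggingFaceTextGenInference(
        inference_server_url="http://localhost:8010/",  # assumed local TGI server
        max_new_tokens=512,
        top_k=10,
        top_p=0.95,
        typical_p=0.95,
        temperature=0.01,
        repetition_penalty=1.03,
    )
    print(llm("What is deep learning?"))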
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.HumanInputLLM[source]#
An LLM wrapper that returns user input as the response.
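A minimal hedged sketch; the wrapper displays the prompt and reads the reply from standard input (the direct call shown, with a default-constructed instance, is an assumption about typical interactive use):
.. code-block:: python

    from langchain.llms import HumanInputLLM

    llm = HumanInputLLM()
    # The prompt is shown to the user; whatever they type is returned as the completion.
    answer = llm("What is the capital of France?")
    print(answer)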
Validators
raise_deprecation » all fields
set_verbose » verbose
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.LlamaCpp[source]#
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
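A slightly fuller hedged sketch using fields documented below (the path and parameter values are illustrative):
.. code-block:: python

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="/path/to/llama/model.bin",  # illustrative local path
        n_ctx=2048,
        temperature=0.8,
        max_tokens=256,
    )
    print(llm("Q: Name the planets in the solar system. A:", stop=["Q:"]))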
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field f16_kv: bool = True#
Use half-precision for key/value cache.
field last_n_tokens_size: Optional[int] = 64#
The number of tokens to look back when applying the repeat_penalty.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field logprobs: Optional[int] = None#
The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRA is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use.
If None, the number of threads is automatically determined.
field repeat_penalty: Optional[float] = 1.1#
The penalty to apply to repeated tokens.
field seed: int = -1#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = True#
Whether to stream the results, token by token.
field suffix: Optional[str] = None#
A suffix to append to the generated text. If None, no suffix is appended.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded in RAM
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]#
Yields results objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
It also calls the callback manager’s on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens being generated.
Yields:
Dictionary-like objects containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
model_path="/path/to/local/model.bin",
temperature = 0.5
)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                        stop=["'", "\n"]):
    result = chunk["choices"][0]
    print(result["text"], end="", flush=True)
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Modal[source]#
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
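A hedged sketch that also forwards endpoint parameters through model_kwargs (the URL and parameter names are assumptions about a deployed Modal web endpoint):
.. code-block:: python

    from langchain.llms import Modal

    modal = Modal(
        endpoint_url="https://example--my-llm.modal.run",  # assumed deployed endpoint
        model_kwargs={"max_length": 256},  # illustrative parameter for the endpoint
    )
    print(modal("Tell me a joke"))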
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for the create call that are not
explicitly specified.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.MosaicML[source]#
Wrapper around MosaicML’s LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicML
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict" | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
14c2a1c4ad3b-118 | )
mosaic_llm = MosaicML(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
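A hedged follow-on sketch of calling the endpoint, using fields documented below (the prompt and generation parameter are illustrative):
.. code-block:: python

    mosaic_llm = MosaicML(
        endpoint_url=endpoint_url,
        mosaicml_api_token="my-api-key",
        inject_instruction_format=True,  # wrap the prompt in the instruct format
        model_kwargs={"max_new_tokens": 128},  # illustrative generation parameter
    )
    print(mosaic_llm("Explain what the MosaicML inference service does in one sentence."))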
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'#
Endpoint URL to use.
field inject_instruction_format: bool = False#
Whether to inject the instruction format into the prompt.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().