langchain.llms.openai.BaseOpenAI¶
class langchain.llms.openai.BaseOpenAI[source]¶
Bases: BaseLLM
Base OpenAI large language model class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
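As an illustrative sketch only (it assumes the openai package is installed and the OPENAI_API_KEY environment variable is set), the fields documented below are usually configured through a concrete subclass such as OpenAI rather than BaseOpenAI directly:
from langchain.llms import OpenAI
# OpenAI subclasses BaseOpenAI, so these keyword arguments map onto the params listed below.
llm = OpenAI(
    model_name="text-davinci-003",
    temperature=0.7,
    max_tokens=256,
    n=1,
    best_of=1,
)
print(llm("Say hello in one word."))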
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed.
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed.
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for the create call that are not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
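For example (a hedged sketch; the model names below are placeholders), when pointing this class at an OpenAI-compatible endpoint whose model name tiktoken does not recognize, a known tokenizer name can be supplied for counting:
llm = OpenAI(
    model_name="my-provider-model",        # placeholder: not known to tiktoken
    tiktoken_model_name="gpt-3.5-turbo",   # count tokens with a tokenizer tiktoken does know
)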
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult[source]¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
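As an illustrative sketch (the prompts and stop sequence are arbitrary), generate accepts a batch of string prompts and returns an LLMResult whose generations list is aligned with the inputs:
result = llm.generate(["Tell me a joke.", "Tell me a poem."], stop=["\n\n"])
for generations in result.generations:
    print(generations[0].text)  # top candidate for each input prompt
print(result.llm_output)        # provider-specific output, e.g. token usage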
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
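A brief hedged sketch combining this counter with the max_context_size property to check whether a prompt plus the requested completion fits the context window (exact numbers depend on the model):
prompt = "Summarize the following report: ..."
if llm.get_num_tokens(prompt) + llm.max_tokens <= llm.max_context_size:
    summary = llm(prompt)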
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]][source]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int[source]¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int[source]¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
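A short hedged sketch contrasting predict and predict_messages (HumanMessage is assumed to come from langchain.schema; the prompt text is arbitrary):
from langchain.schema import HumanMessage

text_out = llm.predict("Translate 'bonjour' to English.", stop=["\n"])
msg_out = llm.predict_messages([HumanMessage(content="Translate 'bonjour' to English.")])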
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
langchain.llms.tongyi.generate_with_retry¶
langchain.llms.tongyi.generate_with_retry(llm: Tongyi, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
langchain.llms.openai.acompletion_with_retry¶
async langchain.llms.openai.acompletion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶
Use tenacity to retry the async completion call.
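A hedged sketch of calling this helper directly (it is assumed here that the keyword arguments are forwarded to the underlying OpenAI completion endpoint; the prompt and token values are placeholders):
import asyncio
from langchain.llms import OpenAI
from langchain.llms.openai import acompletion_with_retry

llm = OpenAI()
response = asyncio.run(
    acompletion_with_retry(llm, prompt="Hello", model=llm.model_name, max_tokens=5)
)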
langchain.llms.textgen.TextGen¶
class langchain.llms.textgen.TextGen[source]¶
Bases: LLM
text-generation-webui models.
To use, you should have the text-generation-webui installed, a model loaded,
and --api added as a command-line option.
Suggested installation, use one-click installer for your OS:
https://github.com/oobabooga/text-generation-webui#one-click-installers
Parameters below taken from text-generation-webui api example:
https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py
Example
from langchain.llms import TextGen
llm = TextGen(model_url="http://localhost:8500")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param add_bos_token: bool = True¶
Add the bos_token to the beginning of prompts.
Disabling this can make the replies more creative.
param ban_eos_token: bool = False¶
Ban the eos_token. Forces the model to never end the generation prematurely.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param do_sample: bool = True¶
Do sample
param early_stopping: bool = False¶
Early stopping
param epsilon_cutoff: Optional[float] = 0¶
Epsilon cutoff
param eta_cutoff: Optional[float] = 0¶
ETA cutoff
param length_penalty: Optional[float] = 1¶
Length Penalty
param max_new_tokens: Optional[int] = 250¶
The maximum number of tokens to generate.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param min_length: Optional[int] = 0¶
Minimum generation length in tokens.
param model_url: str [Required]¶
The full URL to the textgen webui including http[s]://host:port
param no_repeat_ngram_size: Optional[int] = 0¶
If not set to 0, specifies the length of token sets that are completely blocked
from repeating at all. Higher values = blocks larger phrases,
lower values = blocks words or letters from repeating.
Only 0 or high values are a good idea in most cases.
param num_beams: Optional[int] = 1¶
Number of beams
param penalty_alpha: Optional[float] = 0¶
Penalty Alpha
param preset: Optional[str] = None¶
The preset to use in the textgen webui
param repetition_penalty: Optional[float] = 1.18¶
Exponential penalty factor for repeating prior tokens. 1 means no penalty,
higher value = less repetition, lower value = more repetition.
param seed: int = -1¶
Seed (-1 for random)
param skip_special_tokens: bool = True¶
Skip special tokens. Some specific models need this unset.
param stopping_strings: Optional[List[str]] = []¶
A list of strings to stop generation when encountered.
param streaming: bool = False¶
Whether to stream the results, token by token (currently unimplemented).
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: Optional[float] = 1.3¶
Primary factor to control randomness of outputs. 0 = deterministic
(only the most likely token is used). Higher value = more randomness.
param top_k: Optional[float] = 40¶
Similar to top_p, but select instead only the top_k most likely tokens.
Higher value = higher range of possible random results.
param top_p: Optional[float] = 0.1¶
If not set to 1, select tokens with probabilities adding up to less than this
number. Higher value = higher range of possible random results.
param truncation_length: Optional[int] = 2048¶
Truncate the prompt up to this length. The leftmost tokens are removed if
the prompt exceeds this length. Most models require this to be at most 2048.
param typical_p: Optional[float] = 1¶
If not set to 1, select only tokens that are at least this much more likely to
appear than random tokens, given the prior text.
param verbose: bool [Optional]¶
Whether to print out response text.
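A hedged usage sketch that tunes a few of the sampling fields above (the server URL and parameter values are placeholders):
from langchain.llms import TextGen

llm = TextGen(
    model_url="http://localhost:5000",
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=200,
    stopping_strings=["\n\n"],
)
print(llm("Write a haiku about autumn."))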
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using TextGen¶
TextGen
langchain.llms.openllm.OpenLLM¶
class langchain.llms.openllm.OpenLLM[source]¶
Bases: LLM
OpenLLM, supporting both in-process model
instance and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM model locally managed by OpenLLM:
from langchain.llms import OpenLLM
llm = OpenLLM(
model_name='flan-t5',
model_id='google/flan-t5-large',
)
llm("What is the difference between a duck and a goose?")
For all available supported models, you can run ‘openllm models’.
If you have an OpenLLM server running, you can also use it remotely:
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose?")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param embedded: bool = True¶
Initialize this LLM instance in the current process by default. Should
only be set to False when used in conjunction with a BentoML Service.
param llm_kwargs: Dict[str, Any] [Required]¶
Keyword arguments to be passed to openllm.LLM
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: Optional[str] = None¶
Model Id to use. If not provided, will use the default model for the model name.
See ‘openllm models’ for all available model variants.
param model_name: Optional[str] = None¶
Model name to use. See ‘openllm models’ for all available models.
param server_type: ServerType = 'http'¶
Optional server type. Either ‘http’ or ‘grpc’.
param server_url: Optional[str] = None¶
Optional server URL that currently runs a LLMServer with ‘openllm start’.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property runner: openllm.LLMRunner¶
Get the underlying openllm.LLMRunner instance for integration with BentoML.
Example:
.. code-block:: python
llm = OpenLLM(model_name='flan-t5',
model_id='google/flan-t5-large',
embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service("langchain-openllm", runners=[llm.runner])
@svc.api(input=Text(), output=Text())
def chat(input_text: str):
return agent.run(input_text)
Examples using OpenLLM¶
OpenLLM
langchain.llms.aleph_alpha.AlephAlpha¶
class langchain.llms.aleph_alpha.AlephAlpha[source]¶
Bases: LLM
Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param best_of: Optional[int] = None¶
returns the one with the “best of” results
(highest log probability per token)
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param completion_bias_exclusion: Optional[Sequence[str]] = None¶
param completion_bias_exclusion_first_token_only: bool = False¶
Only consider the first token for the completion_bias_exclusion.
param completion_bias_inclusion: Optional[Sequence[str]] = None¶
param completion_bias_inclusion_first_token_only: bool = False¶
param contextual_control_threshold: Optional[float] = None¶
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
param control_log_additive: Optional[bool] = True¶
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
param disable_optimizations: Optional[bool] = False¶
param echo: bool = False¶
Echo the prompt in the completion.
param frequency_penalty: float = 0.0¶
Penalizes repeated tokens according to frequency.
param host: str = 'https://api.aleph-alpha.com'¶
The hostname of the API host.
The default one is “https://api.aleph-alpha.com”.
param hosting: Optional[str] = None¶
Determines in which datacenters the request may be processed.
You can either set the parameter to “aleph-alpha” or omit it (defaulting to None).
Not setting this value, or setting it to None, gives us maximal
flexibility in processing your request in our
own datacenters and on servers hosted with other providers.
Choose this option for maximal availability.
Setting it to “aleph-alpha” allows us to only process the
request in our own datacenters.
Choose this option for maximal data privacy.
param log_probs: Optional[int] = None¶
Number of top log probabilities to be returned for each generated token.
param logit_bias: Optional[Dict[int, float]] = None¶
The logit bias allows to influence the likelihood of generating tokens.
param maximum_tokens: int = 64¶
The maximum number of tokens to be generated.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param minimum_tokens: Optional[int] = 0¶
Generate at least this number of tokens.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param nice: bool = False¶
Setting this to True will signal to the API that you intend to be
nice to other users
by de-prioritizing your request below concurrent ones.
param penalty_bias: Optional[str] = None¶
Penalty bias for the completion.
param penalty_exceptions: Optional[List[str]] = None¶
List of strings that may be generated without penalty,
regardless of other penalty settings
param penalty_exceptions_include_stop_sequences: Optional[bool] = None¶
Should stop_sequences be included in penalty_exceptions.
param presence_penalty: float = 0.0¶
Penalizes repeated tokens.
param raw_completion: bool = False¶
Force the raw completion of the model to be returned.
param repetition_penalties_include_completion: bool = True¶
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
param repetition_penalties_include_prompt: Optional[bool] = False¶
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
param request_timeout_seconds: int = 305¶
Client timeout that will be set for HTTP requests in the
requests library’s API calls.
Server will close all requests after 300 seconds with an internal server error.
param sequence_penalty: float = 0.0¶
param sequence_penalty_min_length: int = 2¶
param stop_sequences: Optional[List[str]] = None¶
Stop sequences to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
A non-negative float that tunes the degree of randomness in generation.
param tokens: Optional[bool] = False¶
Whether to return the tokens of the completion.
param top_k: int = 0¶
Number of most likely tokens to consider at each step.
param top_p: float = 0.0¶
Total probability mass of tokens to consider at each step.
param total_retries: int = 8¶
The number of retries made in case requests fail with certain retryable
status codes. If the last
retry fails a corresponding exception is raised. Note, that between retries
an exponential backoff
is applied, starting with 0.5 s after the first retry and doubling for
each retry made. So with the
default setting of 8 retries a total wait time of 63.5 s is added
between the retries.
param use_multiplicative_frequency_penalty: bool = False¶
param use_multiplicative_presence_penalty: Optional[bool] = False¶
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
param use_multiplicative_sequence_penalty: bool = False¶
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-4 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models). | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-5 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-6 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-7 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-8 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
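A minimal sketch of such a batched call, assuming llm is an already-configured instance of this class; the prompts and stop sequence below are placeholders:
result = llm.generate(["First prompt", "Second prompt"], stop=["\n"])
for prompt_generations in result.generations:
    print(prompt_generations[0].text)  # top candidate for each input prompt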
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
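A short sketch of using these helpers to check that a prompt fits the context window; the 2048-token limit below is an assumed illustrative value, not a documented one:
prompt = "How many tokens does this prompt use?"
num_tokens = llm.get_num_tokens(prompt)
token_ids = llm.get_token_ids(prompt)  # num_tokens is typically len(token_ids)
if num_tokens > 2048:
    raise ValueError("Prompt does not fit in the assumed context window.")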
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-9 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-10 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
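A brief sketch contrasting the two prediction helpers above; the message content is a placeholder:
from langchain.schema import HumanMessage

text_out = llm.predict("Say hello.")  # raw text in, string out
message_out = llm.predict_messages([HumanMessage(content="Say hello.")])  # messages in, message out
print(text_out, message_out.content)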
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
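A minimal sketch of the streaming interface shown above; providers without native streaming may yield the whole completion as a single chunk:
for chunk in llm.stream("Write one sentence about the sea."):
    print(chunk, end="", flush=True)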
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
9c56a846455e-11 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
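A small sketch of with_fallbacks, assuming backup_llm is a second, already-configured LLM; if the primary model raises, the fallback handles the call:
robust_llm = llm.with_fallbacks([backup_llm])
print(robust_llm.invoke("Hello"))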
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using AlephAlpha¶
Aleph Alpha | https://api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html |
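A minimal instantiation sketch using only the parameters documented above; authentication (an Aleph Alpha API key) is assumed to be configured separately and is not shown:
from langchain.llms import AlephAlpha

llm = AlephAlpha(
    model="luminous-base",
    maximum_tokens=64,
    temperature=0.0,
    stop_sequences=["Q:"],
)
print(llm("Q: What can the luminous models do?\nA:"))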
ba7db26775b2-0 | langchain.llms.petals.Petals¶
class langchain.llms.petals.Petals[source]¶
Bases: LLM
Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
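Continuing the example, assuming HUGGINGFACE_API_KEY is set as described above:
response = petals("Tell me a joke.")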
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
The client to use for the API calls.
param do_sample: bool = True¶
Whether or not to use sampling; use greedy decoding otherwise.
param huggingface_api_key: Optional[str] = None¶
param max_length: Optional[int] = None¶
The maximum length of the sequence to be generated.
param max_new_tokens: int = 256¶
The maximum number of new tokens to generate in the completion.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call
not explicitly specified.
param model_name: str = 'bigscience/bloom-petals'¶
The model to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use
param tokenizer: Any = None¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-1 | What sampling temperature to use
param tokenizer: Any = None¶
The tokenizer to use for the API calls.
param top_k: Optional[int] = None¶
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
param top_p: float = 0.9¶
The cumulative probability for top-p sampling.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-3 | Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-4 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”) | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-8 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
ba7db26775b2-9 | property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Petals¶
Petals | https://api.python.langchain.com/en/latest/llms/langchain.llms.petals.Petals.html |
fdb373857b48-0 | langchain_experimental.llms.rellm_decoder.import_rellm¶
langchain_experimental.llms.rellm_decoder.import_rellm() → rellm[source]¶
Lazily import rellm. | https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.import_rellm.html |
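A one-line usage sketch; the call is expected to fail if the rellm package is not installed:
from langchain_experimental.llms.rellm_decoder import import_rellm

rellm = import_rellm()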
2b0cebe4a417-0 | langchain.llms.minimax.Minimax¶
class langchain.llms.minimax.Minimax[source]¶
Bases: LLM
Wrapper around Minimax large language models.
To use, you should have the environment variable
MINIMAX_API_KEY and MINIMAX_GROUP_ID set with your API key,
or pass them as a named parameter to the constructor.
.. rubric:: Example
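A minimal sketch for the rubric above, assuming MINIMAX_API_KEY and MINIMAX_GROUP_ID are set in the environment as described:
from langchain.llms import Minimax

minimax = Minimax(model="abab5.5-chat", temperature=0.7)
print(minimax("Tell me a joke."))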
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param max_tokens: int = 256¶
Denotes the number of tokens to predict per generation.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param minimax_api_host: Optional[str] = None¶
param minimax_api_key: Optional[str] = None¶
param minimax_group_id: Optional[str] = None¶
param model: str = 'abab5.5-chat'¶
Model name to use.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
A non-negative float that tunes the degree of randomness in generation.
param top_p: float = 0.95¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text. | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value, | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-2 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-3 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-4 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string. | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
2b0cebe4a417-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Minimax¶
Minimax | https://api.python.langchain.com/en/latest/llms/langchain.llms.minimax.Minimax.html |
c573674c31da-0 | langchain.llms.vertexai.completion_with_retry¶
langchain.llms.vertexai.completion_with_retry(llm: VertexAI, *args: Any, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.completion_with_retry.html |
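A hedged usage sketch; the prompt argument and max_output_tokens keyword below are illustrative assumptions that are simply forwarded to the retried Vertex AI client call:
from langchain.llms import VertexAI
from langchain.llms.vertexai import completion_with_retry

llm = VertexAI()
response = completion_with_retry(llm, "Hello, Vertex AI!", max_output_tokens=64)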
029374e3d6aa-0 | langchain.llms.self_hosted.SelfHostedPipeline¶
class langchain.llms.self_hosted.SelfHostedPipeline[source]¶
Bases: LLM
Model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def load_pipeline():
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    return pipeline(
        "text-generation", model=model, tokenizer=tokenizer,
        max_new_tokens=10
    )

def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"]

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
    model_load_fn=load_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=my_model,
hardware=gpu,
model_reqs=["./", "torch", "transformers"], | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-1 | model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline

generator = pipeline(model="gpt2")
rh.blob(
    pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Init the pipeline with an auxiliary function.
The load function must be in global scope to be imported
and run on the server, i.e. in a module and not a REPL or closure.
Then, initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _generate_text>¶
Inference function to send to the remote hardware.
param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load function.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_load_fn: Callable [Required]¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'torch']¶
Requirements to install on the hardware in order to run inference with the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text. | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value, | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-3 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-4 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-5 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM[source]¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API. | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
029374e3d6aa-6 | This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
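For example, with a hypothetical llm instance (by default the count comes from a GPT-2 tokenizer provided by the transformers package, unless the subclass overrides it):
.. code-block:: python
text = "How many tokens are in this sentence?"
token_ids = llm.get_token_ids(text)
# get_num_tokens is the length of get_token_ids for the same input.
assert llm.get_num_tokens(text) == len(token_ids)
print(token_ids[:5], len(token_ids))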
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
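For example, with a hypothetical llm instance, the two prediction helpers differ only in the input and output types they accept:
.. code-block:: python
from langchain.schema import HumanMessage

# String in, string out.
answer = llm.predict("What is the capital of France?", stop=["\n"])

# Messages in, message out; useful when surrounding code expects chat types.
reply = llm.predict_messages([HumanMessage(content="What is the capital of France?")])
print(answer, reply.content)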
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
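A brief sketch with hypothetical primary_llm and backup_llm instances; if the primary raises an exception, the fallbacks are tried in order:
.. code-block:: python
resilient_llm = primary_llm.with_fallbacks([backup_llm])
print(resilient_llm.invoke("Summarize LangChain in one sentence."))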
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using SelfHostedPipeline¶
Runhouse
langchain.llms.huggingface_pipeline.HuggingFacePipeline¶
class langchain.llms.huggingface_pipeline.HuggingFacePipeline[source]¶
Bases: LLM
HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
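The resulting wrapper behaves like any other LLM, for instance inside an LLMChain (the prompt template and question below are illustrative only):
.. code-block:: python
from langchain import LLMChain, PromptTemplate

template = "Question: {question}\nAnswer:"
prompt = PromptTemplate(template=template, input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=hf)
print(chain.run(question="What is electroencephalography?"))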
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str = 'gpt2'¶
Model name to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments passed to the model.
param pipeline_kwargs: Optional[dict] = None¶
Keyword arguments passed to the pipeline.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → LLM[source]¶
Construct the pipeline object from model_id and task.
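A sketch that loads the pipeline onto the first GPU; the model id, device index, and generation kwargs are placeholders, not recommendations:
.. code-block:: python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-560m",
    task="text-generation",
    device=0,  # GPU index; the default -1 runs on CPU
    pipeline_kwargs={"max_new_tokens": 64},
)
print(llm("Translate 'hello' to French:"))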
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using HuggingFacePipeline¶
Hugging Face
RELLM
JSONFormer
langchain.llms.databricks.Databricks¶
class langchain.llms.databricks.Databricks[source]¶
Bases: LLM
Databricks serving endpoint or a cluster driver proxy app for LLM.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have “Can Query” permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster driver proxy app (recommended for interactive development).
One can load an LLM on a Databricks interactive cluster and start a local HTTP
server on the driver node to serve the model at / using HTTP POST method
with JSON input/output.
Please use a port number between [3000, 8000] and let the server listen to
the driver IP address or simply 0.0.0.0 instead of localhost only.
To wrap it as an LLM you must have “Can Attach To” permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}
outputs: {"type": "string"}
If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
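The sketches below show both construction styles plus the optional transform functions; the endpoint name, cluster id, port, and prompts are placeholders for your own deployment:
.. code-block:: python
from langchain.llms import Databricks

# 1) Serving endpoint. Host and token are resolved from DATABRICKS_HOST /
#    DATABRICKS_TOKEN, or from the notebook context inside Databricks.
llm_endpoint = Databricks(endpoint_name="my-llm-endpoint")

# 2) Cluster driver proxy app.
llm_proxy = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")

# Optional: adapt a non-matching model signature before/after the request.
def transform_input(**request):
    request["prompt"] = f"Answer briefly: {request['prompt']}"
    return request

def transform_output(response):
    return str(response).strip()

llm_custom = Databricks(
    endpoint_name="my-llm-endpoint",
    transform_input_fn=transform_input,
    transform_output_fn=transform_output,
)
print(llm_custom("How are you?"))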
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_token: str [Optional]¶
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in “single user” or
“no isolation shared” mode.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param cluster_driver_port: Optional[str] = None¶
The port number used by the HTTP server running on the cluster driver node.
The server should listen on the driver IP address or simply 0.0.0.0 to connect.
We recommend the server using a port number between [3000, 8000].
param cluster_id: Optional[str] = None¶
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode, the current cluster ID is used as default.
You must not set both endpoint_name and cluster_id.
param endpoint_name: Optional[str] = None¶
Name of the model serving endpoint.
You must specify the endpoint name to connect to a model serving endpoint.
You must not set both endpoint_name and cluster_id.
param host: str [Optional]¶
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in “single user”
or “no isolation shared” mode.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[Dict[str, Any]] = None¶
Extra parameters to pass to the endpoint.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param transform_input_fn: Optional[Callable] = None¶
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
param transform_output_fn: Optional[Callable[[...], str]] = None¶
A function that transforms the output from the endpoint to the generated text.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Databricks¶
Databricks
langchain.llms.aviary.get_models¶
langchain.llms.aviary.get_models() → List[str][source]¶
List available models
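A small sketch; this assumes the Aviary backend is configured through the AVIARY_URL and AVIARY_TOKEN environment variables, whose values below are placeholders:
.. code-block:: python
import os

os.environ["AVIARY_URL"] = "http://localhost:8000"  # placeholder backend URL
os.environ["AVIARY_TOKEN"] = "<your-token>"         # placeholder token

from langchain.llms.aviary import get_models

print(get_models())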
langchain.llms.koboldai.KoboldApiLLM¶
class langchain.llms.koboldai.KoboldApiLLM[source]¶
Bases: LLM
Kobold API language model.
It includes several fields that can be used to control the text generation process.
To use this class, instantiate it with the required parameters and call it with a
prompt to generate text. For example:
kobold = KoboldApiLLM(endpoint="http://localhost:5000")
result = kobold("Write a story about a dragon.")
This will send a POST request to the Kobold API with the provided prompt and
generate text.
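A slightly fuller sketch that also sets the sampling fields documented below; the endpoint and values are placeholders to tune for your own KoboldAI server:
.. code-block:: python
from langchain.llms import KoboldApiLLM

kobold = KoboldApiLLM(
    endpoint="http://localhost:5000",
    max_context_length=1600,
    max_length=80,
    rep_pen=1.12,
)
print(kobold("Write a story about a dragon."))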
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint: str [Required]¶
The API endpoint to use for generating text.
param max_context_length: Optional[int] = 1600¶
Maximum number of tokens to send to the model.
minimum: 1
param max_length: Optional[int] = 80¶
Number of tokens to generate.
maximum: 512
minimum: 1
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param rep_pen: Optional[float] = 1.12¶
Base repetition penalty value.
minimum: 1
param rep_pen_range: Optional[int] = 1024¶
Repetition penalty range.
minimum: 0
param rep_pen_slope: Optional[float] = 0.9¶
Repetition penalty slope.
minimum: 0
param tags: Optional[List[str]] = None¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html |