langchain.llms.llamacpp.LlamaCpp¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
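The async variants are awaitable and otherwise mirror their synchronous counterparts. A minimal sketch of calling apredict on a LlamaCpp instance (the model path is a placeholder for a local GGUF file, and llama-cpp-python is assumed to be installed):
.. code-block:: python

    import asyncio

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(model_path="models/llama-2-7b.Q4_K_M.gguf")  # hypothetical local model file

    async def main() -> None:
        # apredict returns only the top string completion for the prompt.
        answer = await llm.apredict("Q: Name a prime number.\nA:", stop=["\n"])
        print(answer)

    asyncio.run(main())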
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
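A hedged sketch of generate_prompt, reusing the llm instance from the sketch above: PromptValues are built with a prompt template, and the returned LLMResult carries one list of Generations per input prompt.
.. code-block:: python

    from langchain.prompts import PromptTemplate

    template = PromptTemplate.from_template("Translate to French: {text}")
    prompt_values = [template.format_prompt(text=t) for t in ["hello", "goodbye"]]

    result = llm.generate_prompt(prompt_values, stop=["\n"])
    for generations in result.generations:   # one inner list per input prompt
        print(generations[0].text)           # top candidate for that prompt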
get_num_tokens(text: str) → int[source]¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
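For example, a quick pre-flight check against an assumed context size (2048 here is illustrative; use your model's actual context window):
.. code-block:: python

    prompt = "Summarize the following article: ..."
    n_ctx = 2048  # illustrative context window size
    if llm.get_num_tokens(prompt) > n_ctx:
        raise ValueError("Prompt does not fit in the model's context window.")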
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
|
d1a9bcf3f58a-8
|
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
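A short sketch contrasting predict (raw text in, string out) with predict_messages (chat messages in, message out), again assuming the llm instance from the earlier sketch:
.. code-block:: python

    from langchain.schema import HumanMessage

    text_answer = llm.predict("What is the capital of France?", stop=["\n"])
    message_answer = llm.predict_messages(
        [HumanMessage(content="What is the capital of France?")]
    )
    print(text_answer)             # plain string
    print(message_answer.content)  # BaseMessage; .content holds the text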
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
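stream yields string chunks as they are produced, which is useful for incremental display. A minimal sketch, assuming the same llm instance as above:
.. code-block:: python

    for chunk in llm.stream("Write a haiku about mountains."):
        print(chunk, end="", flush=True)
    print()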
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
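with_fallbacks wraps the model so that a backup Runnable is tried when the primary raises an exception. A hedged sketch with a hypothetical second model as the fallback:
.. code-block:: python

    backup_llm = LlamaCpp(model_path="models/backup-model.Q4_K_M.gguf")  # hypothetical fallback model
    resilient_llm = llm.with_fallbacks([backup_llm])
    print(resilient_llm.invoke("Hello"))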
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using LlamaCpp¶
Llama.cpp
Llama-cpp
Running LLMs locally
Use local LLMs
WebResearchRetriever
langchain.llms.bedrock.Bedrock¶
class langchain.llms.bedrock.Bedrock[source]¶
Bases: LLM
Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_url: Optional[str] = None¶
Needed if you don’t want to default to us-east-1 endpoint
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str [Required]¶
ID of the model to call, e.g., amazon.titan-tg1-large. This is equivalent to the modelId property in the list-foundation-models API.
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param region_name: Optional[str] = None¶
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if it is not provided here.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
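A minimal, hedged construction sketch based on the parameters above; the model id, region, profile name, and model_kwargs are placeholders for your own AWS setup:
.. code-block:: python

    from langchain.llms import Bedrock

    llm = Bedrock(
        model_id="amazon.titan-tg1-large",     # as returned by list-foundation-models
        region_name="us-east-1",               # optional; otherwise taken from AWS config
        credentials_profile_name="default",    # optional; see the boto3 credentials guide
        model_kwargs={"temperature": 0.2},     # provider-specific keyword arguments (illustrative)
    )
    print(llm("Explain Amazon Bedrock in one sentence."))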
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Bedrock¶
Bedrock
langchain.llms.ai21.AI21PenaltyData¶
class langchain.llms.ai21.AI21PenaltyData[source]¶
Bases: BaseModel
Parameters for AI21 penalty data.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param applyToEmojis: bool = True¶
param applyToNumbers: bool = True¶
param applyToPunctuations: bool = True¶
param applyToStopwords: bool = True¶
param applyToWhitespaces: bool = True¶
param scale: int = 0¶
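A hedged sketch of how penalty data is typically supplied to the AI21 LLM wrapper. The field names mirror the AI21 API (camelCase); the AI21 constructor arguments shown here are assumptions for illustration:
.. code-block:: python

    from langchain.llms import AI21
    from langchain.llms.ai21 import AI21PenaltyData

    presence_penalty = AI21PenaltyData(scale=3, applyToStopwords=False)
    llm = AI21(  # assumes AI21_API_KEY is set in the environment
        model="j2-mid",
        presencePenalty=presence_penalty,
    )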
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.llms.huggingface_hub.HuggingFaceHub¶
class langchain.llms.huggingface_hub.HuggingFaceHub[source]¶
Bases: LLM
HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param huggingfacehub_api_token: Optional[str] = None¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param repo_id: str = 'gpt2'¶
Model name to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param task: Optional[str] = None¶
Task to call the model with.
Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]¶
Whether to print out response text.
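Building on the example above, a hedged sketch that targets a text2text-generation repo and passes provider-specific model_kwargs (the repo id and settings are illustrative; HUGGINGFACEHUB_API_TOKEN is assumed to be set in the environment):
.. code-block:: python

    from langchain.llms import HuggingFaceHub

    hf = HuggingFaceHub(
        repo_id="google/flan-t5-base",
        task="text2text-generation",
        model_kwargs={"temperature": 0.5, "max_length": 64},
    )
    print(hf("Translate English to German: How old are you?"))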
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using HuggingFaceHub¶
Hugging Face
langchain_experimental.llms.anthropic_functions.AnthropicFunctions¶
class langchain_experimental.llms.anthropic_functions.AnthropicFunctions[source]¶
Bases: BaseChatModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
Whether to cache the response.
param callback_manager: Optional[BaseCallbackManager] = None¶
Callback manager to add to the run trace.
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: langchain.chat_models.anthropic.ChatAnthropic [Required]¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
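A hedged construction sketch based on the model parameter above. This experimental wrapper sits over a ChatAnthropic model; the model name and calling pattern here are assumptions for illustration, and ANTHROPIC_API_KEY is assumed to be set:
.. code-block:: python

    from langchain.chat_models import ChatAnthropic
    from langchain.schema import HumanMessage
    from langchain_experimental.llms.anthropic_functions import AnthropicFunctions

    # Wrap a ChatAnthropic instance (per the required `model` parameter above).
    functions_model = AnthropicFunctions(model=ChatAnthropic(model="claude-2"))
    reply = functions_model.predict_messages([HumanMessage(content="Hello there")])
    print(reply.content)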
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult[source]¶
Top-level call.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[BaseMessageChunk]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Top-level call.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
|
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
|
cee0c5c0a317-0
|
langchain.llms.bedrock.LLMInputOutputAdapter¶
class langchain.llms.bedrock.LLMInputOutputAdapter[source]¶
Adapter class to prepare the inputs from LangChain into the format that the LLM model expects.
It also provides helper functions to extract the generated text from the model response.
Methods
__init__()
prepare_input(provider, prompt, model_kwargs)
prepare_output(provider, response)
__init__()¶
classmethod prepare_input(provider: str, prompt: str, model_kwargs: Dict[str, Any]) → Dict[str, Any][source]¶
classmethod prepare_output(provider: str, response: Any) → str[source]¶
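A rough usage sketch of the two class methods follows; the Bedrock LLM calls them internally, and the "anthropic" provider name, the prompt text, and the max_tokens_to_sample value are illustrative placeholders only:
.. code-block:: python
from langchain.llms.bedrock import LLMInputOutputAdapter

# Build the provider-specific request body; the exact payload shape
# depends on the provider string that is passed in.
body = LLMInputOutputAdapter.prepare_input(
    provider="anthropic",
    prompt="Human: Tell me a joke.\n\nAssistant:",
    model_kwargs={"max_tokens_to_sample": 256},
)
# `body` is then JSON-serialized and sent to the Bedrock endpoint; the raw
# response is decoded back to plain text with
# LLMInputOutputAdapter.prepare_output(provider="anthropic", response=...).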
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.LLMInputOutputAdapter.html
|
f99c31591771-0
|
langchain.llms.base.LLM¶
class langchain.llms.base.LLM[source]¶
Bases: BaseLLM
Base LLM abstract class.
The purpose of this class is to expose a simpler interface for working
with LLMs, rather than expecting the user to implement the full _generate method.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
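Concretely, a subclass typically implements just the private _call method and an _llm_type identifier. The EchoLLM class below is a minimal illustrative sketch of that pattern, not a class shipped with the library:
.. code-block:: python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class EchoLLM(LLM):
    """Toy LLM that echoes the prompt back (for illustration only)."""

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Honor the stop-word semantics described on this page: cut the
        # output at the first occurrence of any stop substring.
        if stop:
            for token in stop:
                prompt = prompt.split(token)[0]
        return prompt


llm = EchoLLM()
print(llm("Hello, world!"))  # prints "Hello, world!"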
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-1
|
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-2
|
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-3
|
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-4
|
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-5
|
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
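As a sketch of how a PromptValue is usually produced and passed in (assuming llm is an instance of any concrete LLM subclass; the template text is illustrative):
.. code-block:: python
from langchain.prompts import PromptTemplate

# format_prompt() returns a PromptValue, which generate_prompt() converts
# to a string for completion models or to messages for chat models.
prompt_value = PromptTemplate.from_template(
    "Write a one-line summary of: {topic}"
).format_prompt(topic="tokenizers")

result = llm.generate_prompt([prompt_value], stop=["\n\n"])
print(result.generations[0][0].text)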
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
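For example, a caller might guard against oversized inputs before invoking the model; llm is assumed to be a concrete LLM instance, and the 4096-token limit below is a placeholder, not a property of any particular model:
.. code-block:: python
MAX_CONTEXT_TOKENS = 4096  # placeholder; the real limit depends on the model

prompt = "..."  # some potentially long user-provided text
if llm.get_num_tokens(prompt) > MAX_CONTEXT_TOKENS:
    raise ValueError("Prompt is too long for the model's context window.")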
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-6
|
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-7
|
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
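A brief sketch of the two call styles side by side (assuming llm is a concrete LLM instance; the question is arbitrary):
.. code-block:: python
from langchain.schema import HumanMessage

# Raw-text style: a string goes in, a string comes back.
answer_str = llm.predict("What is the capital of France?", stop=["\n"])

# Chat-message style: a list of messages goes in, a BaseMessage comes back.
answer_msg = llm.predict_messages(
    [HumanMessage(content="What is the capital of France?")]
)
print(answer_str, answer_msg.content)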
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
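The saved file can later be restored with the loading helper; a small sketch using the same placeholder path and an already-constructed llm instance:
.. code-block:: python
from langchain.llms.loading import load_llm

llm.save(file_path="path/llm.yaml")
loaded_llm = load_llm("path/llm.yaml")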
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
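stream accepts the same input union as invoke and yields string chunks as they arrive; whether output actually streams incrementally depends on the underlying model, and non-streaming implementations typically yield the full text in one chunk. A minimal sketch, assuming llm is a concrete LLM instance:
.. code-block:: python
# Print the completion chunk-by-chunk as it is produced.
for chunk in llm.stream("Tell me a short story about a robot."):
    print(chunk, end="", flush=True)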
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
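with_fallbacks wraps the Runnable so that the listed fallbacks are tried in order whenever the primary raises one of exceptions_to_handle; primary_llm and backup_llm below are hypothetical placeholders for any two configured LLM instances:
.. code-block:: python
# primary_llm and backup_llm are placeholders for two configured LLMs.
robust_llm = primary_llm.with_fallbacks(
    [backup_llm],
    exceptions_to_handle=(Exception,),
)
print(robust_llm.invoke("Summarize the plot of Hamlet in one sentence."))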
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
f99c31591771-8
|
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using LLM¶
Custom LLM
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
|
ceca5fb9e20c-0
|
langchain.llms.google_palm.GooglePalm¶
class langchain.llms.google_palm.GooglePalm[source]¶
Bases: BaseLLM, BaseModel
Google PaLM models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param google_api_key: Optional[str] = None¶
param max_output_tokens: Optional[int] = None¶
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_name: str = 'models/text-bison-001'¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
param top_k: Optional[int] = None¶
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
param top_p: Optional[float] = None¶
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
param verbose: bool [Optional]¶
Whether to print out response text.
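Putting the parameters above together, construction typically looks like the following; the API key value and prompt are placeholders:
.. code-block:: python
from langchain.llms import GooglePalm

palm_llm = GooglePalm(
    google_api_key="my-api-key",  # placeholder value
    model_name="models/text-bison-001",
    temperature=0.2,
    max_output_tokens=256,
)
print(palm_llm("Write a haiku about the ocean."))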
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-1
|
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-2
|
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-3
|
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-4
|
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-5
|
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-6
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-7
|
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
ceca5fb9e20c-8
|
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
|
d194c6a2cc45-0
|
langchain.llms.mosaicml.MosaicML¶
class langchain.llms.mosaicml.MosaicML[source]¶
Bases: LLM
MosaicML LLM service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicML
endpoint_url = (
    "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict"
)
mosaic_llm = MosaicML(
    endpoint_url=endpoint_url,
    mosaicml_api_token="my-api-key"
)
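Once constructed, the instance behaves like any other LangChain LLM; a small follow-on sketch using the mosaic_llm object from the example above (the prompt and stop sequence are arbitrary):
.. code-block:: python
# Single completion via __call__: checks the cache, then hits the endpoint.
print(mosaic_llm("Write a haiku about machine learning.", stop=["\n\n"]))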
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'¶
Endpoint URL to use.
param inject_instruction_format: bool = False¶
Whether to inject the instruction format into the prompt.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param mosaicml_api_token: Optional[str] = None¶
param retry_sleep: float = 1.0¶
How long to sleep for if a rate limit is encountered.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-1
|
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-2
|
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-3
|
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-4
|
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-5
|
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-6
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-7
|
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
d194c6a2cc45-8
|
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using MosaicML¶
MosaicML
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
|
0409f808ee4e-0
|
langchain.llms.human.HumanInputLLM¶
class langchain.llms.human.HumanInputLLM[source]¶
Bases: LLM
It returns user input as the response.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param input_func: Callable [Optional]¶
param input_kwargs: Mapping[str, Any] = {}¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param prompt_func: Callable[[str], None] [Optional]¶
param prompt_kwargs: Mapping[str, Any] = {}¶
param separator: str = '\n'¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
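Because responses come from a person at the keyboard rather than an API, this class is mainly useful for debugging chains; a minimal sketch, where the prompt_func shown is just one way to display the prompt:
.. code-block:: python
from langchain.llms import HumanInputLLM

human_llm = HumanInputLLM(
    prompt_func=lambda prompt: print(f"\n===PROMPT===\n{prompt}\n===END===")
)
# The call blocks until a response is typed on stdin; that text is then
# returned as the "model" output.
reply = human_llm("What is 2 + 2?")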
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-1
|
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-2
|
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-3
|
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-4
|
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-5
|
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
0409f808ee4e-6
|
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using HumanInputLLM¶
Human input LLM
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
|
099af18a7cf4-0
|
langchain.llms.base.update_cache¶
langchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶
Update the cache and get the LLM output.
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.update_cache.html
|
ce4987a62fe6-0
|
langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint¶
class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint[source]¶
Bases: LLM, BaseModel
Azure ML Online Endpoint models.
Example
azure_llm = AzureMLOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
    endpoint_api_key="my-api-key",
    content_formatter=content_formatter,
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param content_formatter: Any = None¶
The content formatter that provides an input and output
transform function to handle formats between the LLM and
the endpoint
param deployment_name: str = ''¶
Deployment Name for Endpoint. NOT REQUIRED to call endpoint. Should be passed
to constructor or specified as env var AZUREML_DEPLOYMENT_NAME.
param endpoint_api_key: str = ''¶
Authentication Key for Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_API_KEY.
param endpoint_url: str = ''¶
URL of pre-existing Endpoint. Should be passed to constructor or specified as
env var AZUREML_ENDPOINT_URL.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
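A minimal construction sketch (illustrative only): the endpoint URL, key, and model kwargs below are placeholders, and DollyContentFormatter is chosen purely as an example formatter; credentials may instead come from the AZUREML_ENDPOINT_URL and AZUREML_ENDPOINT_API_KEY environment variables, as noted above.
.. code-block:: python

    from langchain.llms.azureml_endpoint import (
        AzureMLOnlineEndpoint,
        DollyContentFormatter,
    )

    # Placeholder endpoint details; substitute a real deployment's values or set
    # the AZUREML_ENDPOINT_URL / AZUREML_ENDPOINT_API_KEY environment variables.
    azure_llm = AzureMLOnlineEndpoint(
        endpoint_url="https://<your-endpoint>.<your-region>.inference.ml.azure.com/score",
        endpoint_api_key="<your-api-key>",
        content_formatter=DollyContentFormatter(),  # illustrative formatter choice
        model_kwargs={"temperature": 0.8, "max_tokens": 64},  # assumed kwargs
    )

    # __call__ checks the cache and runs the LLM on the prompt.
    print(azure_llm("Write a one-line greeting."))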
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
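A short sketch of batch, assuming the azure_llm instance from the construction example above; max_concurrency simply caps the number of parallel endpoint calls.
.. code-block:: python

    # Results come back in the same order as the inputs.
    completions = azure_llm.batch(
        ["Summarize MLOps in one sentence.", "Summarize LLMOps in one sentence."],
        max_concurrency=2,  # cap concurrent endpoint calls
    )
    for text in completions:
        print(text)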
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
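An illustrative sketch of reading an LLMResult, again assuming the azure_llm instance from above; generations holds one list of candidate Generation objects per input prompt, and llm_output carries any provider-specific extras.
.. code-block:: python

    result = azure_llm.generate(
        ["Name one Azure region.", "Name one ML framework."],
        stop=["\n"],  # cut each generation at the first newline
    )
    for candidates in result.generations:  # one entry per input prompt
        print(candidates[0].text)          # top candidate for that prompt
    print(result.llm_output)               # provider-specific output, if any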
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
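A small sketch of the token-counting helpers, assuming the azure_llm instance from above; unless a subclass overrides them, the base implementation approximates counts with a GPT-2 tokenizer (which requires the transformers package), so the numbers are an estimate rather than the endpoint's own tokenization.
.. code-block:: python

    prompt = "How many tokens does this prompt use?"
    n_tokens = azure_llm.get_num_tokens(prompt)
    token_ids = azure_llm.get_token_ids(prompt)

    context_window = 2048  # assumed limit for the deployed model
    if n_tokens > context_window:
        raise ValueError(f"Prompt is {n_tokens} tokens; limit is {context_window}.")
    print(n_tokens, token_ids[:5])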
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
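A brief sketch contrasting predict and predict_messages, assuming the azure_llm instance from above; HumanMessage is imported from langchain.schema, and the returned BaseMessage carries its text in the content attribute.
.. code-block:: python

    from langchain.schema import HumanMessage

    # Raw text in, string out.
    answer_text = azure_llm.predict("What is an online endpoint?", stop=["\n"])

    # Chat-style messages in, message out.
    answer_msg = azure_llm.predict_messages(
        [HumanMessage(content="What is an online endpoint?")]
    )
    print(answer_text)
    print(answer_msg.content)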
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
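A hedged sketch of with_fallbacks, assuming the azure_llm instance from above: if the primary endpoint raises an exception, the same input is retried on the fallback runnable. The Cohere fallback is purely illustrative and needs its own credentials.
.. code-block:: python

    from langchain.llms import Cohere

    fallback_llm = Cohere(cohere_api_key="my-api-key")  # illustrative fallback
    robust_llm = azure_llm.with_fallbacks([fallback_llm])

    # Exceptions raised by azure_llm fall through to fallback_llm before
    # surfacing to the caller.
    print(robust_llm.invoke("Give a one-line status message."))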
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using AzureMLOnlineEndpoint¶
AzureML Online Endpoint
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
|
c6f27e621a68-0
|
langchain.llms.azureml_endpoint.ContentFormatterBase¶
class langchain.llms.azureml_endpoint.ContentFormatterBase[source]¶
Transform the request and response of an AzureML endpoint to match the
required schema.
Attributes
accepts
The MIME type of the response data returned from the endpoint
content_type
The MIME type of the input data passed to the endpoint
Methods
__init__()
escape_special_characters(prompt)
Escapes any special characters in prompt
format_request_payload(prompt, model_kwargs)
Formats the request body according to the input schema of the model.
format_response_payload(output)
Formats the response body according to the output schema of the model.
__init__()¶
static escape_special_characters(prompt: str) → str[source]¶
Escapes any special characters in prompt
abstract format_request_payload(prompt: str, model_kwargs: Dict) → bytes[source]¶
Formats the request body according to the input schema of
the model. Returns bytes or a seekable file-like object in the
format specified in the content_type request header.
abstract format_response_payload(output: bytes) → str[source]¶
Formats the response body according to the output
schema of the model. Returns the data type that is
received from the response.
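A sketch of a custom formatter for a hypothetical endpoint that accepts a {"prompt", "parameters"} JSON body and replies with {"text": ...}; this schema is assumed for illustration only, and a real deployment defines its own request and response formats.
.. code-block:: python

    import json
    from typing import Dict

    from langchain.llms.azureml_endpoint import ContentFormatterBase

    class SimpleJSONFormatter(ContentFormatterBase):
        """Hypothetical formatter for an endpoint with a {"prompt", "parameters"} schema."""

        # content_type and accepts default to "application/json" on the base class.

        def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
            # json.dumps handles quoting, so no extra escaping is applied here.
            body = {"prompt": prompt, "parameters": model_kwargs}
            return json.dumps(body).encode("utf-8")

        def format_response_payload(self, output: bytes) -> str:
            # Assumed response schema: {"text": "<completion>"}
            return json.loads(output)["text"]

An instance of this class could then be passed as content_formatter when constructing AzureMLOnlineEndpoint, provided the deployed model actually speaks this schema.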
Examples using ContentFormatterBase¶
AzureML Online Endpoint
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.ContentFormatterBase.html
|
29c4bb3e3142-0
|
langchain.llms.azureml_endpoint.DollyContentFormatter¶
class langchain.llms.azureml_endpoint.DollyContentFormatter[source]¶
Content handler for the Dolly-v2-12b model
Attributes
accepts
The MIME type of the response data returned from the endpoint
content_type
The MIME type of the input data passed to the endpoint
Methods
__init__()
escape_special_characters(prompt)
Escapes any special characters in prompt
format_request_payload(prompt, model_kwargs)
Formats the request body according to the input schema of the model.
format_response_payload(output)
Formats the response body according to the output schema of the model.
__init__()¶
static escape_special_characters(prompt: str) → str¶
Escapes any special characters in prompt
format_request_payload(prompt: str, model_kwargs: Dict) → bytes[source]¶
Formats the request body according to the input schema of
the model. Returns bytes or a seekable file-like object in the
format specified in the content_type request header.
format_response_payload(output: bytes) → str[source]¶
Formats the response body according to the output
schema of the model. Returns the data type that is
received from the response.
Examples using DollyContentFormatter¶
AzureML Online Endpoint
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.DollyContentFormatter.html
|
fb48fdbb5666-0
|
langchain.llms.replicate.Replicate¶
class langchain.llms.replicate.Replicate[source]¶
Bases: LLM
Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, …}
Example
from langchain.llms import Replicate
replicate = Replicate(
    model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
    input={"image_dimensions": "512x512"},
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param input: Dict[str, Any] [Optional]¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: str [Required]¶
param model_kwargs: Dict[str, Any] [Optional]¶
param replicate_api_token: Optional[str] = None¶
param stop: Optional[List[str]] = []¶
Stop sequences to early-terminate generation.
param streaming: bool = False¶
Whether to stream the results.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
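A short sketch of calling the model from the example above; it assumes REPLICATE_API_TOKEN is set in the environment, and for an image model such as stable-diffusion the returned string is typically a URL to the generated artifact rather than prose.
.. code-block:: python

    from langchain.llms import Replicate

    text2image = Replicate(
        model=(
            "stability-ai/stable-diffusion:"
            "27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478"
        ),
        input={"image_dimensions": "512x512"},
    )

    # The "completion" for this model is expected to be a URL to the image.
    image_url = text2image("a photo of a lighthouse at sunset")
    print(image_url)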
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
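A hedged streaming sketch: the model/version string is a placeholder (substitute a real text model from replicate.com), and models that do not emit incremental output may yield the whole completion as a single chunk, so the loop makes no assumption about chunk size.
.. code-block:: python

    from langchain.llms import Replicate

    llm = Replicate(
        model="<owner>/<text-model>:<version-hash>",  # placeholder model string
        streaming=True,  # request incremental output where supported
    )
    for chunk in llm.stream("Tell me a short fact about llamas."):
        print(chunk, end="", flush=True)
    print()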
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Replicate¶
Replicate
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
|
0ed94a45ce68-0
|
langchain.llms.aviary.AviaryBackend¶
class langchain.llms.aviary.AviaryBackend(backend_url: str, bearer: str)[source]¶
Attributes
backend_url
bearer
Methods
__init__(backend_url, bearer)
from_env()
__init__(backend_url: str, bearer: str) → None¶
classmethod from_env() → AviaryBackend[source]¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.AviaryBackend.html
|
ba1cb208b9b2-0
|
langchain.llms.cohere.Cohere¶
class langchain.llms.cohere.Cohere[source]¶
Bases: LLM
Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param cohere_api_key: Optional[str] = None¶
param frequency_penalty: float = 0.0¶
Penalizes repeated tokens according to frequency. Between 0 and 1.
param k: int = 0¶
Number of most likely tokens to consider at each step.
param max_retries: int = 10¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
Denotes the number of tokens to predict per generation.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: Optional[str] = None¶
Model name to use.
param p: int = 1¶
Total probability mass of tokens to consider at each step.
param presence_penalty: float = 0.0¶
Penalizes repeated tokens. Between 0 and 1.
param stop: Optional[List[str]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.75¶
A non-negative float that tunes the degree of randomness in generation.
param truncate: Optional[str] = None¶
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
param verbose: bool [Optional]¶
Whether to print out response text.
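A compact construction sketch; the model name and sampling values are illustrative, and the key may instead come from the COHERE_API_KEY environment variable as noted above.
.. code-block:: python

    from langchain.llms import Cohere

    cohere_llm = Cohere(
        model="command",             # illustrative model name
        cohere_api_key="my-api-key",
        temperature=0.3,
        max_tokens=128,
        k=0,
        p=1,
    )
    print(cohere_llm("Write a haiku about the sea.", stop=["--"]))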
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
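An async sketch using apredict, assuming the cohere_llm instance from the construction sketch above; the call is awaited inside a coroutine driven by asyncio.run.
.. code-block:: python

    import asyncio

    async def main() -> None:
        # Single string in, single string out, awaited rather than blocking.
        reply = await cohere_llm.apredict("Name three ocean currents.", stop=["\n\n"])
        print(reply)

    asyncio.run(main())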
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
|