classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
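Since generate recurs on every LLM class below, a minimal usage sketch may help; it is not from the original docs and assumes an OpenAI instance purely for illustration (any langchain.llms class exposes the same interface):

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003", temperature=0.7)

    # generate() takes a list of prompt strings plus optional stop sequences,
    # and returns an LLMResult: .generations holds one list of Generation
    # objects per input prompt.
    result = llm.generate(["Say hello.", "Name a color."], stop=["\n"])
    for prompt_generations in result.generations:
        print(prompt_generations[0].text)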
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode
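A short sketch of the token-counting helpers documented above (illustrative only; the llm instance is assumed from the earlier example, and exact counts depend on the model's tokenizer):

    text = "How many tokens is this sentence?"
    n = llm.get_num_tokens(text)   # count depends on the tokenizer
    ids = llm.get_token_ids(text)  # the underlying token IDs
    # In the base implementation, get_num_tokens is len(get_token_ids(text)).
    assert len(ids) == n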
predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – path to file to save the LLM to
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.
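A hedged sketch contrasting the predict and predict_messages helpers documented above (the llm instance is assumed; outputs depend on the model):

    from langchain.schema import HumanMessage

    # predict: plain string in, plain string out
    answer = llm.predict("Translate 'bonjour' to English.", stop=["\n"])

    # predict_messages: chat-style messages in, a BaseMessage out
    msg = llm.predict_messages([HumanMessage(content="Say hi.")])
    print(answer, msg.content)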
class langchain.llms.Aviary(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='amazon/LightGPT', aviary_url=None, aviary_token=None, use_prompt_format=True, version=None)[source]
    Bases: langchain.llms.base.LLM
    Allows you to use an Aviary. Aviary is a backend for hosted models. You can find out more about Aviary at http://github.com/ray-project/aviary
    To get a list of the models supported on an Aviary instance, follow the instructions on the website to install the aviary CLI and then run: aviary models
    The AVIARY_URL and AVIARY_TOKEN environment variables must be set.
    Example:
        import os
        from langchain.llms import Aviary

        os.environ["AVIARY_URL"] = "<URL>"
        os.environ["AVIARY_TOKEN"] = "<TOKEN>"

        light = Aviary(model='amazon/LightGPT')
        output = light('How do you make fried rice?')
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model (str), aviary_url (Optional[str]), aviary_token (Optional[str]), use_prompt_format (bool), version (Optional[str])
    Return type: None

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.
attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str
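The async variants mirror the sync methods; a minimal asyncio sketch (not from the original docs; the light instance comes from the Aviary example above, everything else is standard library):

    import asyncio

    async def main():
        # agenerate awaits the same work generate does synchronously
        result = await light.agenerate(["How do you make fried rice?"])
        print(result.generations[0][0].text)

    asyncio.run(main())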
async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – path to file to save the LLM to
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.AzureMLOnlineEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', endpoint_api_key='', deployment_name='', http_client=None, content_formatter=None, model_kwargs=None)[source]
    Bases: langchain.llms.base.LLM, pydantic.main.BaseModel
    Wrapper around Azure ML hosted models using Managed Online Endpoints.
    Example:
        azure_llm = AzureMLOnlineEndpoint(
            endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
            endpoint_api_key="my-api-key",
            deployment_name="my-deployment-name",
            content_formatter=content_formatter,
        )
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), endpoint_url (str), endpoint_api_key (str), deployment_name (str), http_client (Any), content_formatter (Any), model_kwargs (Optional[dict])
    Return type: None
attribute content_formatter: Any = None
    The content formatter that provides an input and output transform function to handle formats between the LLM and the endpoint.

attribute deployment_name: str = ''
    Deployment name for the endpoint. Should be passed to the constructor or specified as the env var AZUREML_DEPLOYMENT_NAME.

attribute endpoint_api_key: str = ''
    Authentication key for the endpoint. Should be passed to the constructor or specified as the env var AZUREML_ENDPOINT_API_KEY.

attribute endpoint_url: str = ''
    URL of the pre-existing endpoint. Should be passed to the constructor or specified as the env var AZUREML_ENDPOINT_URL.

attribute model_kwargs: Optional[dict] = None
    Keyword arguments to pass to the model.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
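As an alternative to constructor arguments, the three endpoint attributes above can come from environment variables; a minimal sketch (the values are placeholders):

    import os

    # Equivalent to passing endpoint_url / endpoint_api_key / deployment_name directly
    os.environ["AZUREML_ENDPOINT_URL"] = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
    os.environ["AZUREML_ENDPOINT_API_KEY"] = "<api-key>"
    os.environ["AZUREML_DEPLOYMENT_NAME"] = "<deployment-name>"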
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model
copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int
get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable.
class langchain.llms.AzureOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]
    Bases: langchain.llms.openai.BaseOpenAI
    Wrapper around Azure-specific OpenAI large language models.
    To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.
    Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.
    Example:
        from langchain.llms import AzureOpenAI
        openai = AzureOpenAI(model_name="text-davinci-003")
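Azure deployments usually also need the deployment name, API base, and API version; a hedged sketch using the constructor parameters listed below (every value here is a placeholder, not a documented default):

    # Placeholders only; match them to your Azure OpenAI resource.
    llm = AzureOpenAI(
        deployment_name="<your-deployment>",
        model_name="text-davinci-003",
        openai_api_base="https://<your-resource>.openai.azure.com/",
        openai_api_version="<api-version>",
        openai_api_key="<api-key>",
    )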
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), client (Any), model (str), temperature (float), max_tokens (int), top_p (float), frequency_penalty (float), presence_penalty (float), n (int), best_of (int), model_kwargs (Dict[str, Any]), openai_api_key (Optional[str]), openai_api_base (Optional[str]), openai_organization (Optional[str]), openai_proxy (Optional[str]), batch_size (int), request_timeout (Optional[Union[float, Tuple[float, float]]]), logit_bias (Optional[Dict[str, float]]), max_retries (int), streaming (bool), allowed_special (Union[Literal['all'], typing.AbstractSet[str]]), disallowed_special (Union[Literal['all'], typing.Collection[str]]), tiktoken_model_name (Optional[str]), deployment_name (str), openai_api_type (str), openai_api_version (str)
    Return type: None

attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
    Set of special tokens that are allowed.

attribute batch_size: int = 20
    Batch size to use when passing multiple documents to generate.

attribute best_of: int = 1
    Generates best_of completions server-side and returns the "best".

attribute deployment_name: str = ''
    Deployment name to use.

attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
    Set of special tokens that are not allowed.

attribute frequency_penalty: float = 0
    Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]
    Adjust the probability of specific tokens being generated.

attribute max_retries: int = 6
    Maximum number of retries to make when generating.

attribute max_tokens: int = 256
    The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

attribute model_kwargs: Dict[str, Any] [Optional]
    Holds any model parameters valid for the create call not explicitly specified.

attribute model_name: str = 'text-davinci-003' (alias 'model')
    Model name to use.

attribute n: int = 1
    How many completions to generate for each prompt.

attribute presence_penalty: float = 0
    Penalizes repeated tokens.

attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    Timeout for requests to the OpenAI completion API. Default is 600 seconds.

attribute streaming: bool = False
    Whether to stream the results or not.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute temperature: float = 0.7
    What sampling temperature to use.

attribute tiktoken_model_name: Optional[str] = None
    The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.
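For instance, a hedged sketch of overriding the tokenizer model (the deployment and model names are placeholders; gpt2 is simply a model name tiktoken recognizes):

    # The provider exposes an OpenAI-like API but serves a non-OpenAI model,
    # so tell tiktoken which tokenizer to use for token counting.
    llm = AzureOpenAI(
        deployment_name="<your-deployment>",
        model_name="<provider-model>",
        tiktoken_model_name="gpt2",
    )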
attribute top_p: float = 1
    Total probability mass of tokens to consider at each step.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
create_llm_result(choices, prompts, token_usage)
    Create the LLMResult from the choices and prompts.
    Parameters: choices (Any), prompts (List[str]), token_usage (Dict[str, int])
    Return type: langchain.schema.LLMResult

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int
get_sub_prompts(params, prompts, stop=None)
    Get the sub prompts for an llm call.
    Parameters: params (Dict[str, Any]), prompts (List[str]), stop (Optional[List[str]])
    Return type: List[List[str]]

get_token_ids(text)
    Get the token IDs using the tiktoken package.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

max_tokens_for_prompt(prompt)
    Calculate the maximum number of tokens possible to generate for a prompt.
    Parameters: prompt (str) – the prompt to pass into the model
    Returns: the maximum number of tokens to generate for a prompt
    Return type: int
    Example:
        max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)
    Calculate the maximum number of tokens possible to generate for a model.
    Parameters: modelname (str) – the model name we want to know the context size for
    Returns: the maximum context size
    Return type: int
    Example:
        max_tokens = openai.modelname_to_contextsize("text-davinci-003")

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

prep_streaming_params(stop=None)
    Prepare the params for streaming.
    Parameters: stop (Optional[List[str]])
    Return type: Dict[str, Any]

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – path to file to save the LLM to
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)
    Call OpenAI with the streaming flag and return the resulting generator.
    BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
    Parameters:
        prompt (str) – the prompt to pass into the model
        stop (Optional[List[str]]) – optional list of stop words to use when generating
    Returns: a generator representing the stream of tokens from OpenAI
    Return type: Generator
    Example:
        generator = openai.stream("Tell me a joke.")
        for token in generator:
            yield token

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

property max_context_size: int
    Get max context size for this model.

class langchain.llms.Banana(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_key='', model_kwargs=None, banana_api_key=None)[source]
    Bases: langchain.llms.base.LLM
    Wrapper around Banana large language models.
    To use, you should have the banana-dev python package installed, and the environment variable BANANA_API_KEY set with your API key.
    Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.
    Example:
        from langchain.llms import Banana
        banana = Banana(model_key="")
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model_key (str), model_kwargs (Dict[str, Any]), banana_api_key (Optional[str])
    Return type: None

attribute model_key: str = ''
    Model endpoint to use.

attribute model_kwargs: Dict[str, Any] [Optional]
    Holds any model parameters valid for the create call not explicitly specified.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model
copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int
get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Baseten(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None)[source] Bases: langchain.llms.base.LLM Use your Baseten models in Langchain To use, you should have the baseten python package installed, and run baseten.login() with your Baseten API key. The required model param can be either a model id or model version id. Using a model version ID will result in slightly faster invocation. Any other model parameters can also be passed in with the format input={model_param: value, …}
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model (str), input (Dict[str, Any]), model_kwargs (Dict[str, Any])
    Return type: None

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model
copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int
get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Beam(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_name='', name='', cpu='', memory='', gpu='', python_version='', python_packages=[], max_length='', url='', model_kwargs=None, beam_client_id='', beam_client_secret='', app_id=None)[source] Bases: langchain.llms.base.LLM Wrapper around Beam API for gpt2 large language model. To use, you should have the beam-sdk python package installed, and the environment variable BEAM_CLIENT_ID set with your client id and BEAM_CLIENT_SECRET set with your client secret. Information on how to get these is available here: https://docs.beam.cloud/account/api-keys.
    The wrapper can then be called as follows, where the name, cpu, memory, gpu, python version, and python packages can be updated accordingly. Once deployed, the instance can be called.
    Example:
        llm = Beam(model_name="gpt2",
                   name="langchain-gpt2",
                   cpu=8,
                   memory="32Gi",
                   gpu="A10G",
                   python_version="python3.8",
                   python_packages=[
                       "diffusers[torch]>=0.10",
                       "transformers",
                       "torch",
                       "pillow",
                       "accelerate",
                       "safetensors",
                       "xformers",
                   ],
                   max_length=50)
        llm._deploy()
        call_result = llm._call(input)
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model_name (str), name (str), cpu (str), memory (str), gpu (str), python_version (str), python_packages (List[str]), max_length (str), url (str), model_kwargs (Dict[str, Any]), beam_client_id (str), beam_client_secret (str), app_id (Optional[str])
    Return type: None
attribute model_kwargs: Dict[str, Any] [Optional]
    Holds any model parameters valid for the create call not explicitly specified.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute url: str = ''
    Model endpoint to use.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

app_creation()[source]
    Creates a Python file which will contain your Beam app definition.
    Return type: None
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the messages.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

run_creation()[source]
    Creates a Python file which will be deployed on beam.
    Return type: None

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – path to file to save the LLM to
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")
property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Bedrock(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, region_name=None, credentials_profile_name=None, model_id, model_kwargs=None)[source] Bases: langchain.llms.base.LLM LLM provider to invoke Bedrock models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service. Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) –
client (Any) – region_name (Optional[str]) – credentials_profile_name (Optional[str]) – model_id (str) – model_kwargs (Optional[Dict]) – Return type None attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute model_id: str [Required] ID of the model to call, e.g. amazon.titan-tg1-large; this is equivalent to the modelId property in the list-foundation-models API. attribute model_kwargs: Optional[Dict] = None Keyword arguments to pass to the model. attribute region_name: Optional[str] = None The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if it is not provided here. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str
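A minimal usage sketch (assumptions: boto3 credentials are already configured, and the model ID and region shown are placeholders you would replace with ones enabled in your account):
.. code-block:: python

    from langchain.llms import Bedrock

    llm = Bedrock(
        model_id="amazon.titan-tg1-large",   # placeholder model id
        region_name="us-west-2",             # falls back to AWS_DEFAULT_REGION if omitted
        credentials_profile_name="default",  # optional; default credential chain if omitted
        model_kwargs={"temperature": 0.5},   # passed through to the model
    )
    print(llm("Summarize what Amazon Bedrock is in one sentence."))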
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) –
dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to the file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.CTransformers(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model, model_type=None, model_file=None, config=None, lib=None)[source] Bases: langchain.llms.base.LLM Wrapper around the C Transformers LLM interface. To use, you should have the ctransformers python package installed. See https://github.com/marella/ctransformers Example from langchain.llms import CTransformers llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model (str) – model_type (Optional[str]) – model_file (Optional[str]) – config (Optional[Dict[str, Any]]) – lib (Optional[str]) – Return type None attribute config: Optional[Dict[str, Any]] = None The config parameters. See https://github.com/marella/ctransformers#config attribute lib: Optional[str] = None The path to a shared library, or one of avx2, avx, basic. attribute model: str [Required] The path to a model file or directory, or the name of a Hugging Face Hub model repo.
attribute model_file: Optional[str] = None The name of the model file in the repo or directory. attribute model_type: Optional[str] = None The model type. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult
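A slightly fuller sketch showing the config attribute (the GGML model path is a placeholder, and the config keys are ctransformers options such as max_new_tokens and temperature; check the linked config reference for the full list):
.. code-block:: python

    from langchain.llms import CTransformers

    # Placeholder local model path; a Hugging Face Hub repo name also works.
    llm = CTransformers(
        model="/path/to/ggml-gpt-2.bin",
        model_type="gpt2",
        config={"max_new_tokens": 256, "temperature": 0.8},
    )
    print(llm("AI is going to"))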
async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs)
Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to the file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.CerebriumAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None, cerebriumai_api_key=None)[source] Bases: langchain.llms.base.LLM Wrapper around CerebriumAI large language models. To use, you should have the cerebrium python package installed, and the environment variable CEREBRIUMAI_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import CerebriumAI cerebrium = CerebriumAI(endpoint_url="") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – endpoint_url (str) – model_kwargs (Dict[str, Any]) – cerebriumai_api_key (Optional[str]) – Return type None
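A minimal sketch with a concrete-looking setup (the endpoint URL is a placeholder for your deployed CerebriumAI endpoint, and the model_kwargs keys depend on the model behind that endpoint):
.. code-block:: python

    import os
    from langchain.llms import CerebriumAI

    os.environ["CEREBRIUMAI_API_KEY"] = "..."  # or pass cerebriumai_api_key=...

    llm = CerebriumAI(
        endpoint_url="https://run.cerebrium.ai/your-endpoint",  # placeholder URL
        model_kwargs={"max_length": 100},  # forwarded with each call
    )
    print(llm("What is a language model?"))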
attribute endpoint_url: str = '' The model endpoint to use. attribute model_kwargs: Dict[str, Any] [Optional] Holds any model parameters valid for the create call that are not explicitly specified. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to the file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Clarifai(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, stub=None, metadata=None, userDataObject=None, model_id=None, model_version_id=None, app_id=None, user_id=None, clarifai_pat_key=None, api_base='https://api.clarifai.com', stop=None)[source] Bases: langchain.llms.base.LLM Wrapper around Clarifai's large language models. To use, you should have an account on the Clarifai platform, the clarifai python package installed, and the environment variable CLARIFAI_PAT_KEY set with your PAT key, or pass it as a named parameter to the constructor. Example from langchain.llms import Clarifai clarifai_llm = Clarifai(clarifai_pat_key=CLARIFAI_PAT_KEY, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – stub (Any) – metadata (Any) – userDataObject (Any) – model_id (Optional[str]) – model_version_id (Optional[str]) – app_id (Optional[str]) – user_id (Optional[str]) – clarifai_pat_key (Optional[str]) – api_base (str) – stop (Optional[List[str]]) – Return type None attribute app_id: Optional[str] = None Clarifai application ID to use. attribute model_id: Optional[str] = None Model ID to use. attribute model_version_id: Optional[str] = None Model version ID to use. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute user_id: Optional[str] = None Clarifai user ID to use. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str
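Making the class example concrete (a sketch; the user, app, and model identifiers are placeholders for values from your own Clarifai account):
.. code-block:: python

    import os
    from langchain.llms import Clarifai

    llm = Clarifai(
        clarifai_pat_key=os.environ["CLARIFAI_PAT_KEY"],
        user_id="my-user",        # placeholder ids; use your own
        app_id="my-app",
        model_id="my-model",
    )
    print(llm("Explain Clarifai in one sentence."))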
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs)
Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to the file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Cohere(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model=None, max_tokens=256, temperature=0.75, k=0, p=1, frequency_penalty=0.0, presence_penalty=0.0, truncate=None, max_retries=10, cohere_api_key=None, stop=None)[source]
Bases: langchain.llms.base.LLM Wrapper around Cohere large language models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.llms import Cohere cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – model (Optional[str]) – max_tokens (int) – temperature (float) – k (int) – p (int) – frequency_penalty (float) – presence_penalty (float) – truncate (Optional[str]) – max_retries (int) – cohere_api_key (Optional[str]) – stop (Optional[List[str]]) – Return type None attribute frequency_penalty: float = 0.0 Penalizes repeated tokens according to frequency. Between 0 and 1. attribute k: int = 0 Number of most likely tokens to consider at each step. attribute max_retries: int = 10 Maximum number of retries to make when generating. attribute max_tokens: int = 256 Denotes the number of tokens to predict per generation. attribute model: Optional[str] = None Model name to use.
attribute p: int = 1 Total probability mass of tokens to consider at each step. attribute presence_penalty: float = 0.0 Penalizes repeated tokens. Between 0 and 1. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: float = 0.75 A non-negative float that tunes the degree of randomness in generation. attribute truncate: Optional[str] = None Specify how the client handles inputs longer than the maximum token length: truncate from START, END, or NONE. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult
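Putting the sampling attributes together (a sketch; the model name is the placeholder from the class example above, and the parameter values are purely illustrative):
.. code-block:: python

    from langchain.llms import Cohere

    llm = Cohere(
        model="gptd-instruct-tft",    # placeholder model name
        cohere_api_key="my-api-key",  # or set COHERE_API_KEY
        max_tokens=128,
        temperature=0.3,    # lower = less random
        k=40,               # sample from the 40 most likely tokens
        frequency_penalty=0.2,
    )
    print(llm("Write a one-line product tagline for a note-taking app."))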
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model.
Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text.
Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to the file to save the LLM to. Return type None Example:
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Databricks(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, host=None, api_token=None, endpoint_name=None, cluster_id=None, cluster_driver_port=None, model_kwargs=None, transform_input_fn=None, transform_output_fn=None)[source] Bases: langchain.llms.base.LLM LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app. It supports two endpoint types: Serving endpoint (recommended for both production and development). We assume that an LLM was registered and deployed to a serving endpoint. To wrap it as an LLM you must have "Can Query" permission to the endpoint. Set endpoint_name accordingly and do not set cluster_id and cluster_driver_port. The expected model signature is:
inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}] outputs: [{"type": "string"}] Cluster driver proxy app (recommended for interactive development). One can load an LLM on a Databricks interactive cluster and start a local HTTP server on the driver node to serve the model at / using the HTTP POST method with JSON input/output. Please use a port number between [3000, 8000] and let the server listen to the driver IP address, or simply 0.0.0.0, instead of localhost only. To wrap it as an LLM you must have "Can Attach To" permission to the cluster. Set cluster_id and cluster_driver_port and do not set endpoint_name. The expected server schema (using JSON schema) is: inputs: {"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]} outputs: {"type": "string"} If the endpoint model signature is different or you want to set extra params, you can use transform_input_fn and transform_output_fn to apply necessary transformations before and after the query.
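A sketch of wrapping a serving endpoint, including a transform_input_fn that adapts a different model signature (the endpoint name and the hypothetical "query" and "stop_sequences" field names are placeholders; host and api_token fall back to the environment or notebook context as described below):
.. code-block:: python

    from langchain.llms import Databricks

    def transform_input(**request):
        # Hypothetical endpoint that expects "query" instead of "prompt".
        return {"query": request["prompt"], "stop_sequences": request.get("stop") or []}

    llm = Databricks(
        endpoint_name="my-llm-endpoint",    # placeholder serving endpoint
        transform_input_fn=transform_input,
        model_kwargs={"temperature": 0.1},  # extra params sent with each request
    )
    print(llm("How are you?"))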
Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – host (str) – api_token (str) – endpoint_name (Optional[str]) – cluster_id (Optional[str]) – cluster_driver_port (Optional[str]) – model_kwargs (Optional[Dict[str, Any]]) – transform_input_fn (Optional[Callable]) – transform_output_fn (Optional[Callable[[...], str]]) – Return type None attribute api_token: str [Optional] Databricks personal access token. If not provided, the default value is determined by the DATABRICKS_TOKEN environment variable if present, or an automatically generated temporary token if running inside a Databricks notebook attached to an interactive cluster in "single user" or "no isolation shared" mode. attribute cluster_driver_port: Optional[str] = None The port number used by the HTTP server running on the cluster driver node. The server should listen on the driver IP address, or simply 0.0.0.0, to connect. We recommend the server use a port number between [3000, 8000]. attribute cluster_id: Optional[str] = None ID of the cluster if connecting to a cluster driver proxy app. If neither endpoint_name nor cluster_id is provided and the code runs inside a Databricks notebook attached to an interactive cluster in "single user" or "no isolation shared" mode, the current cluster ID is used as the default. You must not set both endpoint_name and cluster_id. attribute endpoint_name: Optional[str] = None Name of the model serving endpoint. You must specify the endpoint name to connect to a model serving endpoint. You must not set both endpoint_name and cluster_id. attribute host: str [Optional] Databricks workspace hostname. If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or the hostname of the current Databricks workspace if running inside a Databricks notebook attached to an interactive cluster in "single user" or "no isolation shared" mode. attribute model_kwargs: Optional[Dict[str, Any]] = None Extra parameters to pass to the endpoint. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute transform_input_fn: Optional[Callable] = None A function that transforms {prompt, stop, **kwargs} into a JSON-compatible request object that the endpoint accepts. For example, you can apply a prompt template to the input prompt. attribute transform_output_fn: Optional[Callable[[...], str]] = None A function that transforms the output from the endpoint into the generated text. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) –
kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages)
Get the number of tokens in the messages. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.DeepInfra(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_id='google/flan-t5-xl', model_kwargs=None, deepinfra_api_token=None)[source] Bases: langchain.llms.base.LLM Wrapper around DeepInfra deployed models. To use, you should have the requests python package installed, and the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Example from langchain.llms import DeepInfra di = DeepInfra(model_id="google/flan-t5-xl",
di = DeepInfra(model_id="google/flan-t5-xl", deepinfra_api_token="my-api-key") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – model_id (str) – model_kwargs (Optional[dict]) – deepinfra_api_token (Optional[str]) – Return type None attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check the cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompts and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult
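Extending the class example with model_kwargs (a sketch; the parameter names inside model_kwargs are illustrative and depend on the deployed model's API):
.. code-block:: python

    import os
    from langchain.llms import DeepInfra

    llm = DeepInfra(
        model_id="google/flan-t5-xl",
        deepinfra_api_token=os.environ["DEEPINFRA_API_TOKEN"],
        model_kwargs={"temperature": 0.7, "max_new_tokens": 64},  # illustrative keys
    )
    print(llm("Translate to German: good morning"))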
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict a message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choosing which fields to include, exclude, and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int
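The token helpers above run locally rather than calling the API, which makes them cheap for budgeting context windows; get_token_ids, documented next, exposes the underlying ids. A minimal sketch, assuming the transformers package is installed for the default GPT-2 tokenizer and using a placeholder API token (no request is sent):

from langchain.llms import DeepInfra

llm = DeepInfra(model_id="google/flan-t5-xl", deepinfra_api_token="placeholder-token")

text = "How many tokens is this sentence?"
print(llm.get_num_tokens(text))  # token count, e.g. 7, depending on the tokenizer
print(llm.get_token_ids(text))   # the underlying integer token ids, same length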
get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python
Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.FakeListLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source] Bases: langchain.llms.base.LLM Fake LLM wrapper for testing purposes. Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – responses (List) – i (int) – Return type None attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text.
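FakeListLLM simply replays the responses list in order on each call, which makes chains deterministic in unit tests. A minimal sketch:

from langchain.llms import FakeListLLM

fake = FakeListLLM(responses=["first canned reply", "second canned reply"])
print(fake("any prompt"))      # -> "first canned reply"
print(fake("another prompt"))  # -> "second canned reply"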
__call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str
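The async variants mirror their synchronous counterparts and can be awaited from any asyncio program; not every wrapper implements the async path, and those that do not raise NotImplementedError. A hedged sketch, where llm stands for any async-capable LLM instance:

import asyncio

async def main(llm):
    result = await llm.agenerate(["prompt one", "prompt two"])
    # one inner list of generations per input prompt
    print(result.generations[0][0].text)

# asyncio.run(main(llm))  # uncomment with a concrete async-capable llm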
async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input.
Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.ForefrontAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', temperature=0.7, length=256, top_p=1.0, top_k=40, repetition_penalty=1, forefrontai_api_key=None, base_url=None)[source] Bases: langchain.llms.base.LLM Wrapper around ForefrontAI large language models. To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key. Example from langchain.llms import ForefrontAI forefrontai = ForefrontAI(endpoint_url="")
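A slightly fuller setup sketch; the endpoint URL below is a placeholder for the per-model endpoint shown in your ForefrontAI dashboard:

import os
from langchain.llms import ForefrontAI

llm = ForefrontAI(
    endpoint_url="https://example.forefront.link/your-model-endpoint",  # placeholder
    forefrontai_api_key=os.environ["FOREFRONTAI_API_KEY"],
    temperature=0.7,
    length=128,  # maximum tokens to generate
)
print(llm("Write a one-line slogan for a coffee shop:"))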
Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – endpoint_url (str) – temperature (float) – length (int) – top_p (float) – top_k (int) – repetition_penalty (int) – forefrontai_api_key (Optional[str]) – base_url (Optional[str]) – Return type None attribute base_url: Optional[str] = None Base url to use, if None decides based on model name. attribute endpoint_url: str = '' Model endpoint to use. attribute length: int = 256 The maximum number of tokens to generate in the completion. attribute repetition_penalty: int = 1 Penalizes repeated tokens according to frequency. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: float = 0.7 What sampling temperature to use. attribute top_k: int = 40 The number of highest probability vocabulary tokens to keep for top-k-filtering. attribute top_p: float = 1.0 Total probability mass of tokens to consider at each step. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) –
kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int
get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage
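The predict helper is the string-in/string-out convenience wrapper, and stop sequences truncate the completion at the first match. A minimal sketch reusing the ForefrontAI llm from the earlier snippet:

answer = llm.predict("Q: What is the capital of France?\nA:", stop=["\n"])
print(answer)  # generation halts at the first newline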
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.GPT4All(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, backend=None, n_ctx=512, n_parts=- 1, seed=0, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, embedding=False, n_threads=4, n_predict=256, temp=0.8, top_p=0.95, top_k=40, echo=False, stop=[], repeat_last_n=64, repeat_penalty=1.3, n_batch=1, streaming=False, context_erase=0.5, allow_download=False, client=None)[source] Bases: langchain.llms.base.LLM
class langchain.llms.GPT4All(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, backend=None, n_ctx=512, n_parts=-1, seed=0, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, embedding=False, n_threads=4, n_predict=256, temp=0.8, top_p=0.95, top_k=40, echo=False, stop=[], repeat_last_n=64, repeat_penalty=1.3, n_batch=1, streaming=False, context_erase=0.5, allow_download=False, client=None)[source] Bases: langchain.llms.base.LLM Wrapper around GPT4All language models. To use, you should have the gpt4all python package installed, the pre-trained model file, and the model's config information. Example from langchain.llms import GPT4All model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8) # Simplest invocation response = model("Once upon a time, ")
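Because GPT4All runs locally, streaming is often worth enabling so tokens appear as they are produced. A sketch using the stock stdout callback handler; the model path is a placeholder:

from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = GPT4All(
    model="./models/gpt4all-model.bin",  # placeholder local path
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Tell me a short story about a robot:")  # tokens print to stdout as they arrive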
Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – model (str) – backend (Optional[str]) – n_ctx (int) – n_parts (int) – seed (int) – f16_kv (bool) – logits_all (bool) – vocab_only (bool) – use_mlock (bool) – embedding (bool) – n_threads (Optional[int]) – n_predict (Optional[int]) – temp (Optional[float]) – top_p (Optional[float]) – top_k (Optional[int]) – echo (Optional[bool]) – stop (Optional[List[str]]) – repeat_last_n (Optional[int]) – repeat_penalty (Optional[float]) – n_batch (int) – streaming (bool) – context_erase (float) – allow_download (bool) – client (Any) – Return type None attribute allow_download: bool = False If model does not exist in ~/.cache/gpt4all/, download it. attribute context_erase: float = 0.5 Leave (n_ctx * context_erase) tokens starting from beginning if the context has run out. attribute echo: Optional[bool] = False Whether to echo the prompt. attribute embedding: bool = False Use embedding mode only. attribute f16_kv: bool = False Use half-precision for key/value cache. attribute logits_all: bool = False Return logits for all tokens, not just the last token. attribute model: str [Required] Path to the pre-trained GPT4All model file. attribute n_batch: int = 1 Batch size for prompt processing. attribute n_ctx: int = 512 Token context window. attribute n_parts: int = -1 Number of parts to split the model into. If -1, the number of parts is automatically determined. attribute n_predict: Optional[int] = 256 The maximum number of tokens to generate. attribute n_threads: Optional[int] = 4 Number of threads to use. attribute repeat_last_n: Optional[int] = 64 Last n tokens to penalize. attribute repeat_penalty: Optional[float] = 1.3 The penalty to apply to repeated tokens. attribute seed: int = 0 Seed. If -1, a random seed is used. attribute stop: Optional[List[str]] = [] A list of strings to stop generation when encountered. attribute streaming: bool = False Whether to stream the results or not.
attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temp: Optional[float] = 0.8 The temperature to use for sampling. attribute top_k: Optional[int] = 40 The top-k value to use for sampling. attribute top_p: Optional[float] = 0.95 The top-p value to use for sampling. attribute use_mlock: bool = False Force system to keep model in RAM. attribute verbose: bool [Optional] Whether to print out response text. attribute vocab_only: bool = False Only load the vocabulary, no weights. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult.
Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text.
Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example:
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.GooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, google_api_key=None, model_name='models/text-bison-001', temperature=0.7, top_p=None, top_k=None, max_output_tokens=None, n=1)[source] Bases: langchain.llms.base.BaseLLM, pydantic.main.BaseModel
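The source gives no usage example for this class, so here is a hedged setup sketch; it assumes the google.generativeai package is installed and the key sits in a GOOGLE_API_KEY environment variable:

import os
from langchain.llms import GooglePalm

palm = GooglePalm(google_api_key=os.environ["GOOGLE_API_KEY"], temperature=0.2)
print(palm("Explain the difference between a list and a tuple in Python."))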
Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – google_api_key (Optional[str]) – model_name (str) – temperature (float) – top_p (Optional[float]) – top_k (Optional[int]) – max_output_tokens (Optional[int]) – n (int) – Return type None attribute max_output_tokens: Optional[int] = None Maximum number of tokens to include in a candidate. Must be greater than zero. If unset, will default to 64. attribute model_name: str = 'models/text-bison-001' Model name to use. attribute n: int = 1 Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: float = 0.7 Run inference with this temperature. Must be in the closed interval [0.0, 1.0]. attribute top_k: Optional[int] = None Decode using top-k sampling: consider the set of top_k most probable tokens. Must be positive. attribute top_p: Optional[float] = None Decode using nucleus sampling: consider the smallest set of tokens whose probability sum is at least top_p. Must be in the closed interval [0.0, 1.0]. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult
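The generate method batches several prompts into one LLMResult whose generations field is a list of lists, one inner list per input prompt; with n greater than 1, GooglePalm returns several candidates per prompt. A sketch reusing the palm instance from the earlier snippet:

result = palm.generate(["Name a color.", "Name an animal."])
for candidates in result.generations:  # one inner list per input prompt
    print(candidates[0].text)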
generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) –
dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable.
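A closing sketch of the introspection helpers documented above; dict() is safe to call without network access, and the lc_* properties drive LangChain's serialization machinery (exact values depend on the class and are shown here only as hedged examples):

print(palm.dict())           # identifying parameters plus a _type key
print(palm.lc_namespace)     # e.g. ["langchain", "llms", "google_palm"]
print(palm.lc_serializable)  # False unless the class opts in to serialization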