6bf9d6959fce-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
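A minimal sketch of the asynchronous single-string path described above (apredict). It assumes llm is an already-constructed LLM instance such as SagemakerEndpoint; the prompt and stop words are illustrative only.

import asyncio

async def summarize(llm) -> str:
    # apredict returns only the top candidate generation as a plain string;
    # generation is cut off at the first occurrence of any stop substring.
    return await llm.apredict(
        "Summarize: LangChain wraps many model providers.", stop=["\n\n"]
    )

# asyncio.run(summarize(llm))  # run with a configured llm instance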
6bf9d6959fce-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls,
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
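The batched generate call above can be illustrated with a short, hedged sketch; llm stands for any configured LLM instance and the prompts are placeholders. An LLMResult exposes one list of candidate Generations per input prompt plus provider-specific metadata.

prompts = ["Translate 'good morning' to French.", "Translate 'good night' to French."]
result = llm.generate(prompts, stop=["\n"])

for prompt, gens in zip(prompts, result.generations):
    print(prompt, "->", gens[0].text)   # first candidate for each input prompt
print(result.llm_output)                # provider-specific output, may be None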
6bf9d6959fce-5
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
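A small sketch of the context-window check that get_num_tokens enables; llm is a configured LLM and the 2048-token limit is an assumed, illustrative value.

MAX_CONTEXT_TOKENS = 2048  # assumed limit; the real value depends on the model
prompt = "A long document to be summarized ..."
if llm.get_num_tokens(prompt) > MAX_CONTEXT_TOKENS:
    raise ValueError("Prompt does not fit in the model's context window.")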
6bf9d6959fce-6
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
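A hedged sketch contrasting predict and predict_messages as documented above; llm is a configured LLM instance and the question is illustrative (for a pure text-completion model the messages are converted to a single prompt string internally).

from langchain.schema import HumanMessage

text_answer = llm.predict("What is the capital of France?", stop=["\n"])
message_answer = llm.predict_messages([HumanMessage(content="What is the capital of France?")])
print(text_answer, message_answer.content)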
6bf9d6959fce-7
Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that AWS credentials and the python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
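A sketch of save and stream as listed above, assuming llm is a configured LLM; the file path is illustrative, and whether tokens actually arrive incrementally depends on the provider.

llm.save(file_path="path/llm.yaml")         # serialize the LLM configuration to YAML

for chunk in llm.stream("Write a haiku about the sea.", stop=["\n\n"]):
    print(chunk, end="", flush=True)        # yields string chunks as they arrive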
af75b384aff3-0
langchain.llms.base.BaseLLM¶ class langchain.llms.base.BaseLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]¶ Bases: BaseLanguageModel[str], ABC Base LLM abstract interface. It should take in a prompt and return a string. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶ param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
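The abstract interface above is usually implemented by subclassing the convenience LLM class (itself a BaseLLM). The sketch below is illustrative only; a real subclass would call a model provider inside _call.

from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class EchoLLM(LLM):
    """Toy LLM that simply echoes the prompt, for illustration."""

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(self, prompt: str, stop: Optional[List[str]] = None,
              run_manager: Optional[CallbackManagerForLLMRun] = None,
              **kwargs: Any) -> str:
        return prompt  # a real implementation would call its provider API here

# EchoLLM()("hello") -> "hello"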
af75b384aff3-1
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str][source]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult[source]¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult[source]¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
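A brief async sketch of agenerate as described above, assuming llm is a configured LLM; the prompts are placeholders.

import asyncio

async def first_candidates(llm, prompts):
    result = await llm.agenerate(prompts)
    return [gens[0].text for gens in result.generations]

# asyncio.run(first_candidates(llm, ["prompt one", "prompt two"]))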
af75b384aff3-2
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str[source]¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
af75b384aff3-3
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str][source]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str][source]¶ dict(**kwargs: Any) → Dict[source]¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult[source]¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult[source]¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
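The Runnable-style entry points batch and invoke shown above can be sketched as follows; llm is a configured LLM and the max_concurrency value is arbitrary.

answers = llm.batch(["Name a prime number.", "Name an even number."], max_concurrency=2)
single = llm.invoke("Name an odd number.", stop=["\n"])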
af75b384aff3-4
This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
af75b384aff3-5
Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str[source]¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields[source]¶ Raise deprecation warning if callback_manager is used.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
af75b384aff3-6
Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None[source]¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose[source]¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str][source]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using BaseLLM¶ BabyAGI User Guide BabyAGI with Tools SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.BaseLLM.html
1d5742d1a8ea-0
langchain.llms.promptlayer_openai.PromptLayerOpenAIChat¶ class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]¶ Bases: OpenAIChat Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your openAI API key and promptlayer key respectively. All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional Parameters pl_tags – List of strings to tag the request with. return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAIChat
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
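A hedged sketch of the two PromptLayer-specific parameters described above. It assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set; the tag names are illustrative, and the generation_info key name is an assumption based on typical PromptLayer wrappers.

from langchain.llms import PromptLayerOpenAIChat

chat = PromptLayerOpenAIChat(
    model_name="gpt-3.5-turbo",
    pl_tags=["langchain", "docs-example"],  # tags attached to the PromptLayer request
    return_pl_id=True,                      # ask for the request id back
)
result = chat.generate(["Say hello."])
pl_id = result.generations[0][0].generation_info.get("pl_request_id")  # assumed key name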
1d5742d1a8ea-1
Generation object. Example from langchain.llms import PromptLayerOpenAIChat openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param max_retries: int = 6¶ Maximum number of retries to make when generating. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-3.5-turbo'¶ Model name to use. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param pl_tags: Optional[List[str]] = None¶ param prefix_messages: List [Optional]¶ Series of messages for Chat input. param return_pl_id: Optional[bool] = False¶ param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-2
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-3
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-4
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-5
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-6
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
1d5742d1a8ea-7
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields¶ Validate that the API key and python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
ecab9bd40d73-0
langchain.llms.mlflow_ai_gateway.Params¶ class langchain.llms.mlflow_ai_gateway.Params(*, temperature: float = 0.0, candidate_count: int = 1, stop: Optional[List[str]] = None, max_tokens: Optional[int] = None, **extra_data: Any)[source]¶ Bases: BaseModel Parameters for the MLflow AI Gateway LLM. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param candidate_count: int = 1¶ The number of candidates to return. param max_tokens: Optional[int] = None¶ param stop: Optional[List[str]] = None¶ param temperature: float = 0.0¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.mlflow_ai_gateway.Params.html
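An illustrative construction of the Params object described above; the values are arbitrary.

from langchain.llms.mlflow_ai_gateway import Params

params = Params(temperature=0.2, candidate_count=2, stop=["\n"], max_tokens=128)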
e28ddf79c9e3-0
langchain.llms.azureml_endpoint.OSSContentFormatter¶ class langchain.llms.azureml_endpoint.OSSContentFormatter[source]¶ Bases: GPT2ContentFormatter Deprecated: Kept for backwards compatibility Content handler for LLMs from the OSS catalog. Methods __init__() escape_special_characters(prompt) Escapes any special characters in prompt format_request_payload(prompt, model_kwargs) Formats the request body according to the input schema of the model. format_response_payload(output) Formats the response body according to the output schema of the model. Attributes accepts The MIME type of the response data returned from the endpoint content_formatter content_type The MIME type of the input data passed to the endpoint static escape_special_characters(prompt: str) → str¶ Escapes any special characters in prompt format_request_payload(prompt: str, model_kwargs: Dict) → bytes¶ Formats the request body according to the input schema of the model. Returns bytes or seekable file like object in the format specified in the content_type request header. format_response_payload(output: bytes) → str¶ Formats the response body according to the output schema of the model. Returns the data type that is received from the response. accepts: Optional[str] = 'application/json'¶ The MIME type of the response data returned from the endpoint content_formatter: Any = None¶ content_type: Optional[str] = 'application/json'¶ The MIME type of the input data passed to the endpoint
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.OSSContentFormatter.html
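A short sketch of how the formatter methods above fit together; the prompt and model kwargs are placeholders, and the exact payload layout depends on the model's input schema.

from langchain.llms.azureml_endpoint import OSSContentFormatter

formatter = OSSContentFormatter()
body = formatter.format_request_payload("Tell me a joke.", {"max_tokens": 50})
# `body` is the JSON request payload (bytes); the raw endpoint response would
# later be decoded with formatter.format_response_payload(response_bytes).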
37d192d65f9e-0
langchain.llms.beam.Beam¶ class langchain.llms.beam.Beam(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_name: str = '', name: str = '', cpu: str = '', memory: str = '', gpu: str = '', python_version: str = '', python_packages: List[str] = [], max_length: str = '', url: str = '', model_kwargs: Dict[str, Any] = None, beam_client_id: str = '', beam_client_secret: str = '', app_id: Optional[str] = None)[source]¶ Bases: LLM Beam API for gpt2 large language model. To use, you should have the beam-sdk python package installed, and the environment variable BEAM_CLIENT_ID set with your client id and BEAM_CLIENT_SECRET set with your client secret. Information on how to get this is available here: https://docs.beam.cloud/account/api-keys. The wrapper can then be called as follows, where the name, cpu, memory, gpu, python version, and python packages can be updated accordingly. Once deployed, the instance can be called. Example llm = Beam(model_name="gpt2", name="langchain-gpt2", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors",
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-1
"pillow", "accelerate", "safetensors", "xformers",], max_length=50) llm._deploy() call_result = llm._call(input) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param app_id: Optional[str] = None¶ param beam_client_id: str = ''¶ param beam_client_secret: str = ''¶ param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param cpu: str = ''¶ param gpu: str = ''¶ param max_length: str = ''¶ param memory: str = ''¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = ''¶ param name: str = ''¶ param python_packages: List[str] = []¶ param python_version: str = ''¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param url: str = ''¶ model endpoint to use param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
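Putting the class example above together with the documented environment variables; the credential values and deployment parameters below are placeholders, and _deploy()/_call() follow the example given in the class docstring.

import os

from langchain.llms import Beam

os.environ["BEAM_CLIENT_ID"] = "<client-id>"          # placeholder credential
os.environ["BEAM_CLIENT_SECRET"] = "<client-secret>"  # placeholder credential

llm = Beam(model_name="gpt2", name="langchain-gpt2", cpu=8, memory="32Gi",
           gpu="A10G", python_version="python3.8",
           python_packages=["transformers", "torch"], max_length=50)
llm._deploy()                                         # deploy the app, per the example above
print(llm._call("Running machine learning on a remote GPU"))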
37d192d65f9e-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ app_creation() → None[source]¶ Creates a Python file which will contain your Beam app definition. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-4
Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-5
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-6
Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used.
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
37d192d65f9e-7
Raise deprecation warning if callback_manager is used. run_creation() → None[source]¶ Creates a Python file which will be deployed on beam. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and python package exist in the environment. property authorization: str¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using Beam¶ Beam
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
03b7985d10f5-0
langchain.llms.replicate.Replicate¶ class langchain.llms.replicate.Replicate(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, input: Dict[str, Any] = None, model_kwargs: Dict[str, Any] = None, replicate_api_token: Optional[str] = None, streaming: bool = False, stop: Optional[List[str]] = [])[source]¶ Bases: LLM Replicate models. To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, …} Example from langchain.llms import Replicate replicate = Replicate(model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478", input={"image_dimensions": "512x512"}) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param input: Dict[str, Any] [Optional]¶ param metadata: Optional[Dict[str, Any]] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
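A hedged usage sketch for Replicate; it assumes REPLICATE_API_TOKEN is set, and the model identifier, input parameters, and prompt are placeholders.

from langchain.llms import Replicate

llm = Replicate(
    model="<owner>/<model>:<version-hash>",          # placeholder model identifier
    input={"temperature": 0.75, "max_length": 200},  # provider-specific parameters
)
print(llm("Describe LangChain in one sentence."))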
03b7985d10f5-1
param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: str [Required]¶ param model_kwargs: Dict[str, Any] [Optional]¶ param replicate_api_token: Optional[str] = None¶ param stop: Optional[List[str]] = []¶ Stop sequences to early-terminate generation. param streaming: bool = False¶ Whether to stream the results. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
03b7985d10f5-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
03b7985d10f5-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
03b7985d10f5-4
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
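A sketch of the batched generate() call described above, assuming a configured Replicate instance (placeholder model id) and a valid API token.

.. code-block:: python

    from langchain.llms import Replicate

    llm = Replicate(model="owner/model-name:version-hash")  # placeholder id

    # generate() takes a list of prompt strings and returns an LLMResult whose
    # .generations field holds one list of candidate Generations per prompt.
    result = llm.generate(
        ["Write a haiku about rivers.", "Write a haiku about mountains."],
        stop=["\n\n"],
    )
    for candidates in result.generations:
        print(candidates[0].text)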
03b7985d10f5-5
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
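A sketch of checking prompt size with get_num_tokens() before calling predict(). The default token counter is only an approximation for Replicate-hosted models, and the 2048-token budget below is an assumed figure for illustration.

.. code-block:: python

    from langchain.llms import Replicate

    llm = Replicate(model="owner/model-name:version-hash")  # placeholder id

    prompt = "Summarize the plot of Hamlet in one sentence."
    n_tokens = llm.get_num_tokens(prompt)  # approximate count from the default tokenizer
    print(f"prompt is roughly {n_tokens} tokens")

    if n_tokens < 2048:  # assumed context budget, purely illustrative
        print(llm.predict(prompt, stop=["\n"]))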
03b7985d10f5-6
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and python package exist in the environment. property lc_attributes: Dict¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
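A sketch combining stream() and save() from the methods above; whether chunks arrive incrementally depends on the underlying Replicate model, and the file path and model id are illustrative only.

.. code-block:: python

    from langchain.llms import Replicate

    llm = Replicate(model="owner/model-name:version-hash")  # placeholder id

    # stream() yields string chunks instead of one final string.
    for chunk in llm.stream("Tell me a short story about a lighthouse."):
        print(chunk, end="", flush=True)

    # Persist the LLM configuration (not any model weights) for later reloading.
    llm.save(file_path="llm.yaml")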
03b7985d10f5-7
Validate that the API key and python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using Replicate¶ Replicate
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
0e137d2cdb10-0
langchain.llms.human.HumanInputLLM¶ class langchain.llms.human.HumanInputLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_func: Callable = None, prompt_func: Callable[[str], None] = None, separator: str = '\n', input_kwargs: Mapping[str, Any] = {}, prompt_kwargs: Mapping[str, Any] = {})[source]¶ Bases: LLM It returns user input as the response. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param input_func: Callable [Optional]¶ param input_kwargs: Mapping[str, Any] = {}¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param prompt_func: Callable[[str], None] [Optional]¶ param prompt_kwargs: Mapping[str, Any] = {}¶ param separator: str = '\n'¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
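A minimal sketch of HumanInputLLM in use: the prompt is displayed via prompt_func and the "completion" is whatever the user types, which makes it handy for manually stepping through chains. The default input collection is assumed to read lines until a blank line is entered.

.. code-block:: python

    from langchain.llms import HumanInputLLM

    llm = HumanInputLLM(
        prompt_func=lambda prompt: print(f"\n=== PROMPT ===\n{prompt}\n=== END ==="),
    )

    # The user sees the prompt and types the response interactively.
    response = llm("What is 2 + 2?")
    print(response)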
0e137d2cdb10-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-4
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-5
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-6
Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0e137d2cdb10-7
Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using HumanInputLLM¶ Human input LLM
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
167ef5826a63-0
langchain.llms.openllm.IdentifyingParams¶ class langchain.llms.openllm.IdentifyingParams[source]¶ Bases: TypedDict Parameters for identifying a model as a typed dict. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() Attributes model_name model_id server_url server_type embedded llm_kwargs clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶ Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html
167ef5826a63-1
items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values¶ embedded: bool¶ llm_kwargs: Dict[str, Any]¶ model_id: Optional[str]¶ model_name: str¶ server_type: Optional[Literal['http', 'grpc']]¶ server_url: Optional[str]¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html
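Because IdentifyingParams is a TypedDict, it is a plain dict at runtime; a hypothetical sketch of constructing one follows, with all values chosen purely as placeholders.

.. code-block:: python

    from langchain.llms.openllm import IdentifyingParams

    params = IdentifyingParams(
        model_name="flan-t5",               # placeholder
        model_id="google/flan-t5-large",    # placeholder
        server_url=None,
        server_type=None,
        embedded=True,
        llm_kwargs={"temperature": 0.7},
    )

    # Ordinary dict operations apply.
    print(params["model_name"], sorted(params.keys()))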
98a74f127f4f-0
langchain.llms.rwkv.RWKV¶ class langchain.llms.rwkv.RWKV(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model: str, tokens_path: str, strategy: str = 'cpu fp32', rwkv_verbose: bool = True, temperature: float = 1.0, top_p: float = 0.5, penalty_alpha_frequency: float = 0.4, penalty_alpha_presence: float = 0.4, CHUNK_LEN: int = 256, max_tokens_per_generation: int = 256, client: Any = None, tokenizer: Any = None, pipeline: Any = None, model_tokens: Any = None, model_state: Any = None)[source]¶ Bases: LLM, BaseModel RWKV language models. To use, you should have the rwkv python package installed, the pre-trained model file, and the model’s config information. Example from langchain.llms import RWKV model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32") # Simplest invocation response = model("Once upon a time, ") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param CHUNK_LEN: int = 256¶ Batch size for prompt processing. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param max_tokens_per_generation: int = 256¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
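A slightly fuller sketch of the constructor example above that also supplies the required tokens_path; both file paths are placeholders for locally downloaded artifacts.

.. code-block:: python

    from langchain.llms import RWKV

    model = RWKV(
        model="./models/rwkv-3b-fp16.bin",           # placeholder weights path
        tokens_path="./models/20B_tokenizer.json",   # placeholder tokenizer path
        strategy="cpu fp32",
    )

    response = model("Once upon a time, ")
    print(response)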
98a74f127f4f-1
param max_tokens_per_generation: int = 256¶ Maximum number of tokens to generate. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: str [Required]¶ Path to the pre-trained RWKV model file. param penalty_alpha_frequency: float = 0.4¶ Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. param penalty_alpha_presence: float = 0.4¶ Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. param rwkv_verbose: bool = True¶ Print debug information. param strategy: str = 'cpu fp32'¶ The rwkv strategy string, which controls device placement and precision (e.g. 'cpu fp32'). param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 1.0¶ The temperature to use for sampling. param tokens_path: str [Required]¶ Path to the RWKV tokens file. param top_p: float = 0.5¶ The top-p value to use for sampling. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
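A sketch of tuning the sampling parameters documented above; the values are illustrative rather than recommended, and the file paths are placeholders.

.. code-block:: python

    from langchain.llms import RWKV

    model = RWKV(
        model="./models/rwkv-3b-fp16.bin",           # placeholder
        tokens_path="./models/20B_tokenizer.json",   # placeholder
        strategy="cpu fp32",
        temperature=0.8,
        top_p=0.7,
        penalty_alpha_frequency=0.4,
        penalty_alpha_presence=0.4,
        max_tokens_per_generation=128,
    )
    print(model("The capital of France is"))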
98a74f127f4f-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
98a74f127f4f-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
98a74f127f4f-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls,
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
98a74f127f4f-5
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
98a74f127f4f-6
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run_rnn(_tokens: List[str], newline_adj: int = 0) → Any[source]¶ rwkv_generate(prompt: str) → str[source]¶ save(file_path: Union[Path, str]) → None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
98a74f127f4f-7
save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the python package exists in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using RWKV¶ RWKV-4
https://api.python.langchain.com/en/latest/llms/langchain.llms.rwkv.RWKV.html
d86e98b6c470-0
langchain.llms.gooseai.GooseAI¶ class langchain.llms.gooseai.GooseAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-neo-20b', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, min_tokens: int = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, model_kwargs: Dict[str, Any] = None, logit_bias: Optional[Dict[str, float]] = None, gooseai_api_key: Optional[str] = None)[source]¶ Bases: LLM GooseAI large language models. To use, you should have the openai python package installed, and the environment variable GOOSEAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import GooseAI gooseai = GooseAI(model_name="gpt-neo-20b") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param client: Any = None¶ param frequency_penalty: float = 0¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
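A sketch expanding the usage example above; it assumes the openai package is installed and that the API key comes from the environment rather than being hard-coded.

.. code-block:: python

    import os

    from langchain.llms import GooseAI

    # Expect GOOSEAI_API_KEY to already be set in the environment.
    assert "GOOSEAI_API_KEY" in os.environ

    llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7, max_tokens=128)
    print(llm("Explain photosynthesis in one sentence."))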
d86e98b6c470-1
param client: Any = None¶ param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param gooseai_api_key: Optional[str] = None¶ param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model’s maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param min_tokens: int = 1¶ The minimum number of tokens to generate in the completion. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-neo-20b'¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param presence_penalty: float = 0¶ Penalizes repeated tokens. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
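A sketch of combining the explicit fields above with model_kwargs for parameters not declared on the class; top_k is a hypothetical pass-through argument, and a valid GOOSEAI_API_KEY is still required.

.. code-block:: python

    from langchain.llms import GooseAI

    llm = GooseAI(
        model_name="gpt-neo-20b",
        min_tokens=5,
        max_tokens=64,
        n=2,                          # two completions per prompt
        model_kwargs={"top_k": 40},   # hypothetical extra provider parameter
    )

    result = llm.generate(["Write a tagline for a coffee shop."])
    for generation in result.generations[0]:
        print(generation.text)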
d86e98b6c470-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
d86e98b6c470-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
d86e98b6c470-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API.
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
d86e98b6c470-5
This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
d86e98b6c470-6
Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
d86e98b6c470-7
save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'ignore'¶ Examples using GooseAI¶ GooseAI
https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
49819628ecfb-0
langchain.llms.openai.BaseOpenAI¶ class langchain.llms.openai.BaseOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]¶ Bases: BaseLLM Base OpenAI large language model class. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param batch_size: int = 20¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
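BaseOpenAI is normally used through a concrete subclass such as OpenAI rather than instantiated directly; a minimal sketch follows, assuming OPENAI_API_KEY is set in the environment.

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(
        model="text-davinci-003",  # the 'model' alias maps to model_name
        temperature=0.7,
        max_tokens=256,
    )
    print(llm("Translate 'good morning' to Spanish."))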
49819628ecfb-1
Set of special tokens that are allowed. param batch_size: int = 20¶ Batch size to use when passing multiple documents to generate. param best_of: int = 1¶ Generates best_of completions server-side and returns the “best”. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_retries: int = 6¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model’s maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'text-davinci-003' (alias 'model')¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_organization: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param presence_penalty: float = 0¶ Penalizes repeated tokens.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-2
param presence_penalty: float = 0¶ Penalizes repeated tokens. param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶ Timeout for requests to OpenAI completion API. Default is 600 seconds. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
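A sketch of the streaming and request_timeout fields described above on the concrete OpenAI subclass; assumes OPENAI_API_KEY is set, and whether output arrives token by token depends on the provider.

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(
        model="text-davinci-003",
        streaming=True,       # emit partial output instead of waiting for the full completion
        request_timeout=120,  # seconds
    )

    for chunk in llm.stream("Write two lines about the sea."):
        print(chunk, end="", flush=True)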
49819628ecfb-3
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-4
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-5
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult[source]¶ Create the LLMResult from the choices and prompts. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-6
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-7
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]][source]¶ Get the sub prompts for llm call. get_token_ids(text: str) → List[int][source]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ max_tokens_for_prompt(prompt: str) → int[source]¶ Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") static modelname_to_contextsize(modelname: str) → int[source]¶ Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-8
Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
49819628ecfb-9
to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that api key and python package exists in environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property max_context_size: int¶ Get max context size for this model. model Config[source]¶ Bases: object Configuration for this pydantic object. allow_population_by_field_name = True¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
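A hedged usage sketch for the token-accounting helpers documented above (get_num_tokens, modelname_to_contextsize, max_tokens_for_prompt). It assumes the concrete OpenAI subclass of BaseOpenAI, an OPENAI_API_KEY in the environment, and the tiktoken package; the model name and prompt are illustrative only.
.. code-block:: python

    # Hedged sketch: counting tokens and sizing completions with an OpenAI-style LLM.
    from langchain.llms import OpenAI  # concrete subclass of BaseOpenAI

    llm = OpenAI(model_name="text-davinci-003", temperature=0)  # placeholder model name

    prompt = "Tell me a joke."
    n_prompt_tokens = llm.get_num_tokens(prompt)                      # tokens in the prompt (via tiktoken)
    context_size = llm.modelname_to_contextsize("text-davinci-003")   # total context window for the model
    room_left = llm.max_tokens_for_prompt(prompt)                     # tokens left for the completion

    print(n_prompt_tokens, context_size, room_left)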
dbb35743ae7e-0
langchain.llms.base.update_cache¶ langchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶ Update the cache and get the LLM output.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.update_cache.html
9a7c0a9496ec-0
langchain_experimental.llms.anthropic_functions.TagParser¶ class langchain_experimental.llms.anthropic_functions.TagParser[source]¶ Bases: HTMLParser A heavy-handed solution, but it’s fast for prototyping. Might be re-implemented later to restrict scope to the limited grammar, and more efficiency. Uses an HTML parser to parse a limited grammar that allows for syntax of the form: INPUT -> JUNK? VALUE* JUNK -> JUNK_CHARACTER+ JUNK_CHARACTER -> whitespace | , VALUE -> <IDENTIFIER>DATA</IDENTIFIER> | OBJECT OBJECT -> <IDENTIFIER>VALUE+</IDENTIFIER> IDENTIFIER -> [a-Z][a-Z0-9_]* DATA -> .* Interprets the data to allow repetition of tags and recursion to support representation of complex types. ^ Just another approximately wrong grammar specification. Methods __init__() A heavy-handed solution, but it's fast for prototyping. check_for_whole_start_tag(i) clear_cdata_mode() close() Handle any buffered data. feed(data) Feed data to the parser. get_starttag_text() Return full source of start tag: '<...>'. getpos() Return current line number and offset. goahead(end) handle_charref(name) handle_comment(data) handle_data(data) Hook when handling data. handle_decl(decl) handle_endtag(tag) Hook when a tag is closed. handle_entityref(name) handle_pi(data) handle_startendtag(tag, attrs) handle_starttag(tag, attrs) Hook when a new tag is encountered. parse_bogus_comment(i[, report]) parse_comment(i[, report]) parse_declaration(i) parse_endtag(i) parse_html_declaration(i)
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
9a7c0a9496ec-1
parse_declaration(i) parse_endtag(i) parse_html_declaration(i) parse_marked_section(i[, report]) parse_pi(i) parse_starttag(i) reset() Reset this instance. set_cdata_mode(elem) unknown_decl(data) updatepos(i, j) Attributes CDATA_CONTENT_ELEMENTS check_for_whole_start_tag(i)¶ clear_cdata_mode()¶ close()¶ Handle any buffered data. feed(data)¶ Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). get_starttag_text()¶ Return full source of start tag: '<...>'. getpos()¶ Return current line number and offset. goahead(end)¶ handle_charref(name)¶ handle_comment(data)¶ handle_data(data: str) → None[source]¶ Hook when handling data. handle_decl(decl)¶ handle_endtag(tag: str) → None[source]¶ Hook when a tag is closed. handle_entityref(name)¶ handle_pi(data)¶ handle_startendtag(tag, attrs)¶ handle_starttag(tag: str, attrs: Any) → None[source]¶ Hook when a new tag is encountered. parse_bogus_comment(i, report=1)¶ parse_comment(i, report=1)¶ parse_declaration(i)¶ parse_endtag(i)¶ parse_html_declaration(i)¶ parse_marked_section(i, report=1)¶ parse_pi(i)¶ parse_starttag(i)¶ reset()¶ Reset this instance. Loses all unprocessed data. set_cdata_mode(elem)¶ unknown_decl(data)¶ updatepos(i, j)¶
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
9a7c0a9496ec-2
unknown_decl(data)¶ updatepos(i, j)¶ CDATA_CONTENT_ELEMENTS = ('script', 'style')¶
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
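A hedged sketch of driving TagParser over tag-formatted model output using the inherited HTMLParser methods (feed, close) documented above. The parse_data attribute read at the end is an assumption about where the parser accumulates results, not a documented field; check the langchain_experimental source before relying on it.
.. code-block:: python

    # Hedged sketch: parsing tag-structured model output with TagParser.
    from langchain_experimental.llms.anthropic_functions import TagParser

    raw = "<tool><tool_name>search</tool_name><parameters><query>weather</query></parameters></tool>"

    parser = TagParser()
    parser.feed(raw)   # may be called repeatedly with partial chunks of text
    parser.close()     # flush any buffered data

    # Hypothetical: inspect whatever structure the parser accumulated
    # (`parse_data` is an assumed attribute name, hence the defensive getattr).
    print(getattr(parser, "parse_data", None))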
72999d66b632-0
langchain.llms.vertexai.VertexAI¶ class langchain.llms.vertexai.VertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: '_LanguageModel' = None, model_name: str = 'text-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, max_retries: int = 6, tuned_model_name: Optional[str] = None)[source]¶ Bases: _VertexAICommon, LLM Google Vertex AI large language models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param credentials: Any = None¶ The default custom credentials (google.auth.credentials.Credentials) to use param location: str = 'us-central1'¶ The default location to use when making API calls. param max_output_tokens: int = 128¶ Token limit determines the maximum amount of text output from one prompt. param max_retries: int = 6¶ The maximum number of retries to make when generating. param metadata: Optional[Dict[str, Any]] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-1
param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_name: str = 'text-bison'¶ The name of the Vertex AI large language model. param project: Optional[str] = None¶ The default GCP project to use when making Vertex API calls. param request_parallelism: int = 5¶ The amount of parallelism allowed for requests issued to VertexAI models. param stop: Optional[List[str]] = None¶ Optional list of stop words to use when generating. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.0¶ Sampling temperature; it controls the degree of randomness in token selection. param top_k: int = 40¶ How the model selects tokens for output: the next token is selected from among the top-k most probable tokens. param top_p: float = 0.95¶ Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value. param tuned_model_name: Optional[str] = None¶ The name of a tuned model. If provided, model_name is ignored. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-2
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-3
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-4
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-5
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction.
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-6
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
72999d66b632-7
This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the python package exists in environment. property is_codey_model: bool¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. task_executor: ClassVar[Optional[Executor]] = None¶ model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using VertexAI¶ Google Cloud Platform Vertex AI PaLM
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
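A hedged instantiation sketch for VertexAI that uses only parameters documented above. It assumes the google-cloud-aiplatform package is installed and that Google Application Default Credentials (or an explicit credentials object) are available; the project ID and prompt are placeholders.
.. code-block:: python

    # Hedged sketch: instantiating VertexAI and making a single prediction.
    from langchain.llms import VertexAI

    llm = VertexAI(
        model_name="text-bison",      # default shown in the signature above
        project="my-gcp-project",     # placeholder GCP project ID
        temperature=0.2,
        max_output_tokens=256,
        top_p=0.95,
        top_k=40,
    )

    print(llm.predict("Summarize what Vertex AI PaLM models are in one sentence."))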
9cce172d4b9c-0
langchain.llms.chatglm.ChatGLM¶ class langchain.llms.chatglm.ChatGLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = 'http://127.0.0.1:8000/', model_kwargs: Optional[dict] = None, max_token: int = 20000, temperature: float = 0.1, history: List[List] = [], top_p: float = 0.7, with_history: bool = False)[source]¶ Bases: LLM ChatGLM LLM service. Example from langchain.llms import ChatGLM endpoint_url = ( "http://127.0.0.1:8000" ) ChatGLM_llm = ChatGLM( endpoint_url=endpoint_url ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param endpoint_url: str = 'http://127.0.0.1:8000/'¶ Endpoint URL to use. param history: List[List] = []¶ History of the conversation param max_token: int = 20000¶ Max token allowed to pass to the model. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Optional[dict] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-1
param model_kwargs: Optional[dict] = None¶ Key word arguments to pass to the model. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.1¶ LLM model temperature from 0 to 10. param top_p: float = 0.7¶ Top P for nucleus sampling from 0 to 1 param verbose: bool [Optional]¶ Whether to print out response text. param with_history: bool = False¶ Whether to use history or not __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-4
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-5
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-6
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
9cce172d4b9c-7
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using ChatGLM¶ ChatGLM
https://api.python.langchain.com/en/latest/llms/langchain.llms.chatglm.ChatGLM.html
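A hedged sketch of pointing the ChatGLM wrapper at a self-hosted endpoint, using only parameters documented above. It assumes a ChatGLM HTTP API is already serving at the given address; the URL and prompt are placeholders.
.. code-block:: python

    # Hedged sketch: calling a locally served ChatGLM model through this wrapper.
    from langchain.llms import ChatGLM

    llm = ChatGLM(
        endpoint_url="http://127.0.0.1:8000",  # placeholder address of your ChatGLM server
        max_token=2048,
        temperature=0.1,
        with_history=False,  # set True to let the wrapper accumulate `history`
    )

    print(llm.predict("What is the capital of France?"))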
e19d765d4255-0
langchain.docstore.base.Docstore¶ class langchain.docstore.base.Docstore[source]¶ Bases: ABC Interface for accessing a place that stores documents. Methods __init__() search(search) Search for document. abstract search(search: str) → Union[str, Document][source]¶ Search for document. If page exists, return the page summary, and a Document object. If page does not exist, return similar entries.
https://api.python.langchain.com/en/latest/docstore/langchain.docstore.base.Docstore.html
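A hedged sketch of implementing the abstract interface: only search has to be provided. The dict-backed class below is a hypothetical helper, not part of LangChain.
.. code-block:: python

    # Hedged sketch: a minimal custom Docstore.
    from typing import Dict, Union

    from langchain.docstore.base import Docstore
    from langchain.docstore.document import Document


    class DictDocstore(Docstore):
        """Toy docstore backed by a plain dict (hypothetical helper)."""

        def __init__(self, docs: Dict[str, Document]) -> None:
            self._docs = docs

        def search(self, search: str) -> Union[str, Document]:
            # Return the Document if found, else an error string, mirroring the interface.
            return self._docs.get(search, f"ID {search} not found.")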
b1fc0d19ed69-0
langchain.docstore.wikipedia.Wikipedia¶ class langchain.docstore.wikipedia.Wikipedia[source]¶ Bases: Docstore Wrapper around wikipedia API. Check that wikipedia package is installed. Methods __init__() Check that wikipedia package is installed. search(search) Try to search for wiki page. search(search: str) → Union[str, Document][source]¶ Try to search for wiki page. If page exists, return the page summary, and a PageWithLookups object. If page does not exist, return similar entries. Parameters search – search string. Returns: a Document object or error message.
https://api.python.langchain.com/en/latest/docstore/langchain.docstore.wikipedia.Wikipedia.html
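A hedged usage sketch: it requires the wikipedia package (pip install wikipedia), and the query string is a placeholder.
.. code-block:: python

    # Hedged sketch: looking up a page with the Wikipedia docstore.
    from langchain.docstore.wikipedia import Wikipedia

    docstore = Wikipedia()
    result = docstore.search("LangChain")

    # `result` is a Document (page summary plus lookup metadata) when the page exists,
    # otherwise a string suggesting similar entries.
    print(result)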
951f66dbec17-0
langchain.docstore.in_memory.InMemoryDocstore¶ class langchain.docstore.in_memory.InMemoryDocstore(_dict: Optional[Dict[str, Document]] = None)[source]¶ Bases: Docstore, AddableMixin Simple in memory docstore in the form of a dict. Initialize with dict. Methods __init__([_dict]) Initialize with dict. add(texts) Add texts to in memory dictionary. search(search) Search via direct lookup. add(texts: Dict[str, Document]) → None[source]¶ Add texts to in memory dictionary. Parameters texts – dictionary of id -> document. Returns None search(search: str) → Union[str, Document][source]¶ Search via direct lookup. Parameters search – id of a document to search for. Returns Document if found, else error message. Examples using InMemoryDocstore¶ Annoy AutoGPT BabyAGI User Guide BabyAGI with Tools !pip install bs4 Generative Agents in LangChain
https://api.python.langchain.com/en/latest/docstore/langchain.docstore.in_memory.InMemoryDocstore.html
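A hedged sketch of seeding, extending, and querying an InMemoryDocstore; the IDs and page contents are placeholders.
.. code-block:: python

    # Hedged sketch: dict-backed docstore usage.
    from langchain.docstore.document import Document
    from langchain.docstore.in_memory import InMemoryDocstore

    store = InMemoryDocstore({"doc-1": Document(page_content="Annoy index entry")})

    # add() takes a dict of id -> Document (ids are expected to be new).
    store.add({"doc-2": Document(page_content="AutoGPT scratchpad note")})

    print(store.search("doc-2"))       # -> Document
    print(store.search("missing-id"))  # -> error message string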
0cf8fc49671c-0
langchain.docstore.base.AddableMixin¶ class langchain.docstore.base.AddableMixin[source]¶ Bases: ABC Mixin class that supports adding texts. Methods __init__() add(texts) Add more documents. abstract add(texts: Dict[str, Document]) → None[source]¶ Add more documents.
https://api.python.langchain.com/en/latest/docstore/langchain.docstore.base.AddableMixin.html
1fc72a80a1f0-0
langchain.docstore.arbitrary_fn.DocstoreFn¶ class langchain.docstore.arbitrary_fn.DocstoreFn(lookup_fn: Callable[[str], Union[Document, str]])[source]¶ Bases: Docstore Langchain Docstore via arbitrary lookup function. This is useful when: it’s expensive to construct an InMemoryDocstore/dict; you retrieve documents from remote sources; or you just want to reuse existing objects. Methods __init__(lookup_fn) search(search) Search for a document. search(search: str) → Document[source]¶ Search for a document. Parameters search – search string Returns Document if found, else error message.
https://api.python.langchain.com/en/latest/docstore/langchain.docstore.arbitrary_fn.DocstoreFn.html
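A hedged sketch of wrapping an arbitrary lookup function as a docstore; the in-memory lookup and the IDs are illustrative only (in practice the function might query a database or a remote object store).
.. code-block:: python

    # Hedged sketch: DocstoreFn with a custom lookup function.
    from langchain.docstore.arbitrary_fn import DocstoreFn
    from langchain.docstore.document import Document

    docs = {
        "readme": Document(page_content="Project overview"),
        "faq": Document(page_content="Common questions"),
    }

    # lookup_fn takes a string id and returns a Document (or an error string).
    docstore = DocstoreFn(lookup_fn=lambda doc_id: docs.get(doc_id, f"ID {doc_id} not found."))

    print(docstore.search("faq"))  # delegates to the lookup function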
7d556494d04a-0
langchain.text_splitter.SentenceTransformersTokenTextSplitter¶ class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]¶ Bases: TextSplitter Splitting text to tokens using sentence model tokenizer. Create a new TextSplitter. Methods __init__([chunk_overlap, model_name, ...]) Create a new TextSplitter. atransform_documents(documents, **kwargs) Asynchronously transform a sequence of documents by splitting them. count_tokens(*, text) create_documents(texts[, metadatas]) Create documents from a list of texts. from_huggingface_tokenizer(tokenizer, **kwargs) Text splitter that uses HuggingFace tokenizer to count length. from_tiktoken_encoder([encoding_name, ...]) Text splitter that uses tiktoken encoder to count length. split_documents(documents) Split documents. split_text(text) Split text into multiple components. transform_documents(documents, **kwargs) Transform sequence of documents by splitting them. async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶ Asynchronously transform a sequence of documents by splitting them. count_tokens(*, text: str) → int[source]¶ create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶ Create documents from a list of texts. classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶ Text splitter that uses HuggingFace tokenizer to count length.
https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.SentenceTransformersTokenTextSplitter.html
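A hedged usage sketch: it requires the sentence-transformers package (the embedding model is downloaded on first use), and the chunk sizes and input text are placeholders.
.. code-block:: python

    # Hedged sketch: token-aware splitting with SentenceTransformersTokenTextSplitter.
    from langchain.text_splitter import SentenceTransformersTokenTextSplitter

    splitter = SentenceTransformersTokenTextSplitter(
        model_name="sentence-transformers/all-mpnet-base-v2",  # default shown above
        chunk_overlap=50,
        tokens_per_chunk=200,
    )

    text = "LangChain text splitters break long documents into model-sized chunks. " * 50
    chunks = splitter.split_text(text)

    # count_tokens is keyword-only, per the signature above.
    print(len(chunks), splitter.count_tokens(text=text))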