Dataset columns: id (string, length 14–15), text (string, length 22–2.51k), source (string, length 61–160).
2c3768fcfd7b-3
Top Level call generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.llamaapi.ChatLlamaAPI.html
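The generate_prompt contract above is easier to see with a short, hedged example. The sketch below assumes ChatLlamaAPI wraps a llamaapi client passed as client= (check the constructor for your version); everything else uses only the documented interface: format_prompt produces PromptValues, and the result carries one list of candidate Generations per input prompt.

.. code-block:: python

    from llamaapi import LlamaAPI
    from langchain_experimental.llms.llamaapi import ChatLlamaAPI
    from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

    llama = LlamaAPI("<LLAMA_API_TOKEN>")      # placeholder token
    chat = ChatLlamaAPI(client=llama)          # assumption: client= is the wrapped API object

    template = ChatPromptTemplate.from_messages(
        [HumanMessagePromptTemplate.from_template("Summarize in one sentence: {text}")]
    )
    # format_prompt returns a PromptValue, so both prompts can be batched in one call.
    prompts = [template.format_prompt(text=t) for t in ("LangChain is ...", "Llama API is ...")]

    result = chat.generate_prompt(prompts, stop=["\n\n"])
    for candidates in result.generations:      # one list of candidate Generations per input prompt
        print(candidates[0].text)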
2c3768fcfd7b-4
Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.llamaapi.ChatLlamaAPI.html
2c3768fcfd7b-5
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[BaseMessageChunk]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.llamaapi.ChatLlamaAPI.html
11932595c13b-0
langchain.llms.openai.completion_with_retry¶ langchain.llms.openai.completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.completion_with_retry.html
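completion_with_retry is normally invoked inside the OpenAI wrappers rather than by user code; the hedged sketch below shows what the call amounts to: the extra keyword arguments are forwarded to the provider's completion API, while the llm's max_retries setting drives the tenacity retry policy. The specific kwargs (model, prompt, max_tokens) are OpenAI completion parameters, not arguments defined by this helper, and the response shape assumes the pre-1.0 openai client.

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.llms.openai import completion_with_retry

    llm = OpenAI(model_name="text-davinci-003", max_retries=6)  # assumes OPENAI_API_KEY is set

    # Retries transient provider errors with backoff, then returns the raw provider response.
    response = completion_with_retry(
        llm,
        model=llm.model_name,
        prompt="Say hello in five words.",
        max_tokens=16,
    )
    print(response["choices"][0]["text"])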
e306aa7c4336-0
langchain.llms.forefrontai.ForefrontAI¶ class langchain.llms.forefrontai.ForefrontAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, endpoint_url: str = '', temperature: float = 0.7, length: int = 256, top_p: float = 1.0, top_k: int = 40, repetition_penalty: int = 1, forefrontai_api_key: Optional[str] = None, base_url: Optional[str] = None)[source]¶ Bases: LLM ForefrontAI large language models. To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key. Example from langchain.llms import ForefrontAI forefrontai = ForefrontAI(endpoint_url="") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param base_url: Optional[str] = None¶ Base url to use, if None decides based on model name. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param endpoint_url: str = ''¶ Model name to use. param forefrontai_api_key: Optional[str] = None¶ param length: int = 256¶ The maximum number of tokens to generate in the completion. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param repetition_penalty: int = 1¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
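Expanding the one-line example above into a runnable sketch: the endpoint URL below is a placeholder for your own deployed Forefront model, and the generation parameters are the fields documented on this class.

.. code-block:: python

    import os
    from langchain.llms import ForefrontAI

    os.environ["FOREFRONTAI_API_KEY"] = "<your-api-key>"   # or pass forefrontai_api_key=...

    llm = ForefrontAI(
        endpoint_url="https://example-forefront-endpoint.invalid/model",  # placeholder URL
        temperature=0.7,
        length=256,   # maximum number of tokens to generate
        top_p=1.0,
        top_k=40,
    )
    print(llm("Write a one-sentence product description for a solar lantern."))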
e306aa7c4336-1
Metadata to add to the run trace. param repetition_penalty: int = 1¶ Penalizes repeated tokens according to frequency. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param top_k: int = 40¶ The number of highest probability vocabulary tokens to keep for top-k-filtering. param top_p: float = 1.0¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
e306aa7c4336-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
e306aa7c4336-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
e306aa7c4336-4
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
e306aa7c4336-5
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
e306aa7c4336-6
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that api key exists in environment. property lc_attributes: Dict¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
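The save method above writes the constructor parameters to a YAML or JSON file. A hedged round-trip sketch, assuming the file-based loader load_llm in langchain.llms.loading (the counterpart of load_llm_from_config documented later in this reference) and a placeholder endpoint URL:

.. code-block:: python

    from langchain.llms import ForefrontAI
    from langchain.llms.loading import load_llm

    llm = ForefrontAI(endpoint_url="https://example.invalid/model", length=128)  # placeholder URL
    llm.save(file_path="forefront_llm.yaml")   # serialize constructor parameters to YAML

    restored = load_llm("forefront_llm.yaml")  # rebuild the LLM from the saved file
    assert isinstance(restored, ForefrontAI)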
e306aa7c4336-7
Validate that api key exists in environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using ForefrontAI¶ ForefrontAI
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
18a08b459beb-0
langchain.llms.sagemaker_endpoint.ContentHandlerBase¶ class langchain.llms.sagemaker_endpoint.ContentHandlerBase[source]¶ Bases: Generic[INPUT_TYPE, OUTPUT_TYPE] A handler class to transform input from LLM to a format that SageMaker endpoint expects. Similarly, the class handles transforming output from the SageMaker endpoint to a format that LLM class expects. Methods __init__() transform_input(prompt, model_kwargs) Transforms the input to a format that model can accept as the request Body. transform_output(output) Transforms the output from the model to string that the LLM class expects. Attributes accepts The MIME type of the response data returned from endpoint content_type The MIME type of the input data passed to endpoint abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes[source]¶ Transforms the input to a format that model can accept as the request Body. Should return bytes or seekable file like object in the format specified in the content_type request header. abstract transform_output(output: bytes) → OUTPUT_TYPE[source]¶ Transforms the output from the model to string that the LLM class expects. accepts: Optional[str] = 'text/plain'¶ The MIME type of the response data returned from endpoint content_type: Optional[str] = 'text/plain'¶ The MIME type of the input data passed to endpoint Examples using ContentHandlerBase¶ SageMaker Endpoint
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.ContentHandlerBase.html
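A concrete handler makes the two abstract methods above easier to picture. The sketch below assumes a SageMaker endpoint that speaks JSON and returns a list with a generated_text field; those request/response keys are assumptions about the deployed model, not part of the ContentHandlerBase API. LLMContentHandler is the str-to-str specialization used with SagemakerEndpoint, and the output is treated as bytes, following the signature documented here.

.. code-block:: python

    import json
    from typing import Dict

    from langchain.llms.sagemaker_endpoint import LLMContentHandler  # ContentHandlerBase[str, str]


    class JsonContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
            # Build the request body the endpoint expects (assumed schema).
            return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            # Pull the generated text out of the endpoint response (assumed schema).
            payload = json.loads(output.decode("utf-8"))
            return payload[0]["generated_text"]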
beb4bcdd98ac-0
langchain.llms.tongyi.stream_generate_with_retry¶ langchain.llms.tongyi.stream_generate_with_retry(llm: Tongyi, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.tongyi.stream_generate_with_retry.html
f6f6051ce706-0
langchain_experimental.llms.rellm_decoder.RELLM¶ class langchain_experimental.llms.rellm_decoder.RELLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, pipeline: Any = None, model_id: str = 'gpt2', model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, regex: RegexPattern, max_new_tokens: int = 200)[source]¶ Bases: HuggingFacePipeline RELLM wrapped LLM using HuggingFace Pipeline API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param max_new_tokens: int = 200¶ Maximum number of new tokens to generate. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_id: str = 'gpt2'¶ Model name to use. param model_kwargs: Optional[dict] = None¶ Key word arguments passed to the model. param pipeline_kwargs: Optional[dict] = None¶ Key word arguments passed to the pipeline. param regex: RegexPattern [Required]¶ The structured format to complete. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
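A hedged usage sketch for RELLM: regex-constrained generation on top of a Hugging Face pipeline. It assumes the rellm, regex, and transformers packages are installed and that the pipeline is passed through the documented pipeline field; the pattern and prompt are illustrative only.

.. code-block:: python

    import regex  # third-party regex package (provides the pattern type RELLM expects)
    from transformers import pipeline
    from langchain_experimental.llms.rellm_decoder import RELLM

    hf_pipeline = pipeline("text-generation", model="gpt2")

    # Constrain decoding so the completion must match this pattern.
    pattern = regex.compile(r'\{"name": "[A-Za-z ]+"\}')

    llm = RELLM(pipeline=hf_pipeline, regex=pattern, max_new_tokens=200)
    print(llm('Return a JSON object with a single "name" field: '))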
f6f6051ce706-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator check_rellm_installation  »  all fields[source]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → LLM¶ Construct the pipeline object from model_id and task.
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-4
Construct the pipeline object from model_id and task. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-5
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-6
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
f6f6051ce706-7
property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.rellm_decoder.RELLM.html
62d32f673db8-0
langchain.llms.openai.OpenAIChat¶ class langchain.llms.openai.OpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all')[source]¶ Bases: BaseLLM OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAIChat openaichat = OpenAIChat(model_name="gpt-3.5-turbo") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param cache: Optional[bool] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
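A slightly fuller, hedged version of the example above. prefix_messages is assumed to take OpenAI-style role/content dicts that are prepended to every prompt; note that for most chat use cases this wrapper has been superseded by langchain.chat_models.ChatOpenAI.

.. code-block:: python

    import os
    from langchain.llms import OpenAIChat

    os.environ["OPENAI_API_KEY"] = "<your-api-key>"

    llm = OpenAIChat(
        model_name="gpt-3.5-turbo",
        prefix_messages=[{"role": "system", "content": "Answer in one short sentence."}],
        max_retries=6,
    )
    print(llm("What is a context window?"))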
62d32f673db8-1
Set of special tokens that are allowed. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param max_retries: int = 6¶ Maximum number of retries to make when generating. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-3.5-turbo'¶ Model name to use. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param prefix_messages: List [Optional]¶ Series of messages for Chat input. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
62d32f673db8-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
62d32f673db8-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
62d32f673db8-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
62d32f673db8-5
This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int][source]¶ Get the token IDs using the tiktoken package.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
62d32f673db8-6
Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example:
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
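The token helpers above can be exercised directly; a small sketch follows (it assumes OPENAI_API_KEY is set, since the class validates it at construction). Because get_num_tokens is by default the length of get_token_ids, the two printed numbers agree.

.. code-block:: python

    from langchain.llms import OpenAIChat

    llm = OpenAIChat(model_name="gpt-3.5-turbo")

    text = "LangChain wraps many model providers behind one interface."
    ids = llm.get_token_ids(text)              # ordered token ids from tiktoken
    print(len(ids), llm.get_num_tokens(text))  # token count, computed two ways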
62d32f673db8-7
Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that api key and python package exists in environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using OpenAIChat¶ Activeloop’s Deep Lake
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
2d9b8747aeeb-0
langchain.llms.koboldai.clean_url¶ langchain.llms.koboldai.clean_url(url: str) → str[source]¶ Remove trailing slash and /api from url if present.
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.clean_url.html
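A quick behavior sketch for clean_url; the expected outputs in the comments follow the description above.

.. code-block:: python

    from langchain.llms.koboldai import clean_url

    print(clean_url("http://localhost:5000/api"))  # -> http://localhost:5000 (trailing /api removed)
    print(clean_url("http://localhost:5000/"))     # -> http://localhost:5000 (trailing slash removed)
    print(clean_url("http://localhost:5000"))      # unchanged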
d71bc231d861-0
langchain.llms.loading.load_llm_from_config¶ langchain.llms.loading.load_llm_from_config(config: dict) → BaseLLM[source]¶ Load LLM from Config Dict.
https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm_from_config.html
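A hedged sketch of load_llm_from_config: the config dict mirrors what llm.save() writes, with a _type discriminator naming the provider; the field values shown are illustrative OpenAI parameters, not required keys.

.. code-block:: python

    from langchain.llms.loading import load_llm_from_config

    config = {
        "_type": "openai",                 # provider key used by the loading registry
        "model_name": "text-davinci-003",
        "temperature": 0.2,
        "max_tokens": 128,
    }
    llm = load_llm_from_config(config)
    print(type(llm).__name__)              # e.g. OpenAI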
e9ec4766bc30-0
langchain.llms.openai.update_token_usage¶ langchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶ Update token usage.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html
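A hedged sketch of how update_token_usage aggregates usage across provider responses: each requested key found in response["usage"] is added into the running token_usage dict (exact internals may differ slightly).

.. code-block:: python

    from langchain.llms.openai import update_token_usage

    token_usage: dict = {}
    keys = {"prompt_tokens", "completion_tokens", "total_tokens"}

    # Two illustrative provider responses, shaped like the OpenAI completion API output.
    responses = [
        {"usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}},
        {"usage": {"prompt_tokens": 8, "completion_tokens": 20, "total_tokens": 28}},
    ]
    for response in responses:
        update_token_usage(keys, response, token_usage)

    print(token_usage)  # {'prompt_tokens': 20, 'completion_tokens': 50, 'total_tokens': 70}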
1de24894d904-0
langchain.llms.cohere.acompletion_with_retry¶ langchain.llms.cohere.acompletion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.acompletion_with_retry.html
7acf5d467151-0
langchain.llms.openllm.OpenLLM¶ class langchain.llms.openllm.OpenLLM(model_name: Optional[str] = None, *, model_id: Optional[str] = None, server_url: Optional[str] = None, server_type: Literal['grpc', 'http'] = 'http', embedded: bool = True, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_kwargs: Dict[str, Any])[source]¶ Bases: LLM OpenLLM, supporting both in-process model instance and remote OpenLLM servers. To use, you should have the openllm library installed: pip install openllm Learn more at: https://github.com/bentoml/openllm Example running an LLM model locally managed by OpenLLM: from langchain.llms import OpenLLM llm = OpenLLM( model_name='flan-t5', model_id='google/flan-t5-large', ) llm("What is the difference between a duck and a goose?") For all available supported models, you can run ‘openllm models’. If you have an OpenLLM server running, you can also use it remotely: from langchain.llms import OpenLLM llm = OpenLLM(server_url='http://localhost:3000') llm("What is the difference between a duck and a goose?") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-1
param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param embedded: bool = True¶ Initialize this LLM instance in current process by default. Should only set to False when using in conjunction with BentoML Service. param llm_kwargs: Dict[str, Any] [Required]¶ Key word arguments to be passed to openllm.LLM param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_id: Optional[str] = None¶ Model Id to use. If not provided, will use the default model for the model name. See ‘openllm models’ for all available model variants. param model_name: Optional[str] = None¶ Model name to use. See ‘openllm models’ for all available models. param server_type: ServerType = 'http'¶ Optional server type. Either ‘http’ or ‘grpc’. param server_url: Optional[str] = None¶ Optional server URL that currently runs a LLMServer with ‘openllm start’. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls,
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-5
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-6
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-7
Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property runner: openllm.LLMRunner¶ Get the underlying openllm.LLMRunner instance for integration with BentoML. Example: .. code-block:: python llm = OpenLLM(model_name='flan-t5', model_id='google/flan-t5-large', embedded=False) tools = load_tools(["serpapi", "llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION) svc = bentoml.Service("langchain-openllm", runners=[llm.runner])
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
7acf5d467151-8
@svc.api(input=Text(), output=Text()) def chat(input_text: str): return agent.run(input_text) model Config[source]¶ Bases: object extra = 'forbid'¶ Examples using OpenLLM¶ OpenLLM
https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.OpenLLM.html
16201f47e581-0
langchain.llms.aviary.get_models¶ langchain.llms.aviary.get_models() → List[str][source]¶ List available models
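A minimal sketch of calling get_models(); the environment variable names and endpoint below are assumptions about a typical Aviary deployment, not part of the documented signature.
import os
from langchain.llms.aviary import get_models

# Hypothetical backend configuration; adjust to your own Aviary deployment.
os.environ["AVIARY_URL"] = "http://localhost:8000"
os.environ["AVIARY_TOKEN"] = ""  # leave empty if your backend needs no token

models = get_models()  # a list of model identifiers served by the backend
print(models)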
https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_models.html
4b146c75da39-0
langchain.llms.clarifai.Clarifai¶ class langchain.llms.clarifai.Clarifai(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, stub: Any = None, userDataObject: Any = None, model_id: Optional[str] = None, model_version_id: Optional[str] = None, app_id: Optional[str] = None, user_id: Optional[str] = None, pat: Optional[str] = None, api_base: str = 'https://api.clarifai.com')[source]¶ Bases: LLM Clarifai large language models. To use, you should have an account on the Clarifai platform, the clarifai python package installed, and the environment variable CLARIFAI_PAT set with your PAT key, or pass it as a named parameter to the constructor. Example from langchain.llms import Clarifai clarifai_llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_base: str = 'https://api.clarifai.com'¶ param app_id: Optional[str] = None¶ Clarifai application id to use. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-1
param callbacks: Callbacks = None¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_id: Optional[str] = None¶ Model id to use. param model_version_id: Optional[str] = None¶ Model version id to use. param pat: Optional[str] = None¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param userDataObject: Any = None¶ param user_id: Optional[str] = None¶ Clarifai user id to use. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
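A minimal sketch of constructing the model and invoking __call__, assuming CLARIFAI_PAT is set in the environment; the user_id, app_id, and model_id values are placeholders for a model hosted on the Clarifai platform.
import os
from langchain.llms import Clarifai

# Placeholder identifiers; substitute the user/app/model you want to call.
clarifai_llm = Clarifai(
    pat=os.environ["CLARIFAI_PAT"],
    user_id="my-user",
    app_id="my-app",
    model_id="my-model",
)

# __call__ checks the cache and runs the LLM on the given prompt.
print(clarifai_llm("Say hello in three languages.", stop=["\n\n"]))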
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-4
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-5
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
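The token-counting helpers above are inherited from the base LLM class, so they can be sketched with any LLM; FakeListLLM is used here only to avoid needing credentials. As an assumption worth verifying for your model, the default implementation falls back to a GPT-2 tokenizer (via the transformers package), so the counts are approximate for providers with different tokenizers.
from langchain.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["ok"])  # any LLM subclass exposes the same helpers
prompt = "How many tokens is this prompt?"

token_ids = llm.get_token_ids(prompt)  # ordered token ids
n_tokens = llm.get_num_tokens(prompt)  # same count as len(token_ids)
print(n_tokens, token_ids)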
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-6
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that we have all required info to access Clarifai
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
4b146c75da39-7
Validate that we have all required info to access Clarifai platform and python package exists in environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using Clarifai¶ Clarifai
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
b81bb448b833-0
langchain.llms.base.create_base_retry_decorator¶ langchain.llms.base.create_base_retry_decorator(error_types: List[Type[BaseException]], max_retries: int = 1, run_manager: Optional[Union[AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun]] = None) → Callable[[Any], Any][source]¶ Create a retry decorator for a given LLM and provided list of error types.
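A minimal sketch of applying the returned decorator; the flaky_call function and the TimeoutError error type are illustrative, and the helper itself only supplies the retry policy that wraps the call.
from langchain.llms.base import create_base_retry_decorator

# Retry up to 3 times when any of the listed exception types is raised.
retry = create_base_retry_decorator(error_types=[TimeoutError], max_retries=3)

@retry
def flaky_call(prompt: str) -> str:
    # ...call a provider API here; raise TimeoutError on transient failures...
    return "completion text"

print(flaky_call("Hello"))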
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.create_base_retry_decorator.html
c3362460b513-0
langchain.llms.textgen.TextGen¶ class langchain.llms.textgen.TextGen(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, model_url: str, preset: Optional[str] = None, max_new_tokens: Optional[int] = 250, do_sample: bool = True, temperature: Optional[float] = 1.3, top_p: Optional[float] = 0.1, typical_p: Optional[float] = 1, epsilon_cutoff: Optional[float] = 0, eta_cutoff: Optional[float] = 0, repetition_penalty: Optional[float] = 1.18, top_k: Optional[float] = 40, min_length: Optional[int] = 0, no_repeat_ngram_size: Optional[int] = 0, num_beams: Optional[int] = 1, penalty_alpha: Optional[float] = 0, length_penalty: Optional[float] = 1, early_stopping: bool = False, seed: int = - 1, add_bos_token: bool = True, truncation_length: Optional[int] = 2048, ban_eos_token: bool = False, skip_special_tokens: bool = True, stopping_strings: Optional[List[str]] = [], streaming: bool = False)[source]¶ Bases: LLM text-generation-webui models. To use, you should have the text-generation-webui installed, a model loaded, and –api added as a command-line option. Suggested installation, use one-click installer for your OS: https://github.com/oobabooga/text-generation-webui#one-click-installers
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-1
https://github.com/oobabooga/text-generation-webui#one-click-installers Parameters below taken from text-generation-webui api example: https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py Example from langchain.llms import TextGen llm = TextGen(model_url="http://localhost:8500") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param add_bos_token: bool = True¶ Add the bos_token to the beginning of prompts. Disabling this can make the replies more creative. param ban_eos_token: bool = False¶ Ban the eos_token. Forces the model to never end the generation prematurely. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param do_sample: bool = True¶ Do sample param early_stopping: bool = False¶ Early stopping param epsilon_cutoff: Optional[float] = 0¶ Epsilon cutoff param eta_cutoff: Optional[float] = 0¶ ETA cutoff param length_penalty: Optional[float] = 1¶ Length Penalty param max_new_tokens: Optional[int] = 250¶ The maximum number of tokens to generate. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param min_length: Optional[int] = 0¶ Minimum generation length in tokens. param model_url: str [Required]¶ The full URL to the textgen webui including http[s]://host:port param no_repeat_ngram_size: Optional[int] = 0¶ If not set to 0, specifies the length of token sets that are completely blocked
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-2
If not set to 0, specifies the length of token sets that are completely blocked from repeating at all. Higher values = blocks larger phrases, lower values = blocks words or letters from repeating. Only 0 or high values are a good idea in most cases. param num_beams: Optional[int] = 1¶ Number of beams param penalty_alpha: Optional[float] = 0¶ Penalty Alpha param preset: Optional[str] = None¶ The preset to use in the textgen webui param repetition_penalty: Optional[float] = 1.18¶ Exponential penalty factor for repeating prior tokens. 1 means no penalty, higher value = less repetition, lower value = more repetition. param seed: int = -1¶ Seed (-1 for random) param skip_special_tokens: bool = True¶ Skip special tokens. Some specific models need this unset. param stopping_strings: Optional[List[str]] = []¶ A list of strings to stop generation when encountered. param streaming: bool = False¶ Whether to stream the results, token by token (currently unimplemented). param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = 1.3¶ Primary factor to control randomness of outputs. 0 = deterministic (only the most likely token is used). Higher value = more randomness. param top_k: Optional[float] = 40¶ Similar to top_p, but select instead only the top_k most likely tokens. Higher value = higher range of possible random results. param top_p: Optional[float] = 0.1¶ If not set to 1, select tokens with probabilities adding up to less than this number. Higher value = higher range of possible random results. param truncation_length: Optional[int] = 2048¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-3
param truncation_length: Optional[int] = 2048¶ Truncate the prompt up to this length. The leftmost tokens are removed if the prompt exceeds this length. Most models require this to be at most 2048. param typical_p: Optional[float] = 1¶ If not set to 1, select only tokens that are at least this much more likely to appear than random tokens, given the prior text. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
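A minimal sketch tying the sampling parameters above to a call, assuming a local text-generation-webui instance started with --api; the URL, port, and sampling values are illustrative rather than recommendations.
from langchain.llms import TextGen

llm = TextGen(
    model_url="http://localhost:5000",  # hypothetical local webui endpoint
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=128,
    stopping_strings=["###"],  # stop generation when this marker is produced
)
print(llm("Write a haiku about GPUs."))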
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-4
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-5
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-6
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-7
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-8
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
c3362460b513-9
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ Examples using TextGen¶ TextGen
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
5e63ffe26864-0
langchain.llms.openai.AzureOpenAI¶ class langchain.llms.openai.AzureOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, deployment_name: str = '', openai_api_type: str = '', openai_api_version: str = '')[source]¶ Bases: BaseOpenAI Azure-specific OpenAI large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-1
environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import AzureOpenAI openai = AzureOpenAI(model_name="text-davinci-003") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param batch_size: int = 20¶ Batch size to use when passing multiple documents to generate. param best_of: int = 1¶ Generates best_of completions server-side and returns the “best”. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param deployment_name: str = ''¶ Deployment name to use. param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_retries: int = 6¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model’s maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace.
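A minimal sketch of a typical Azure configuration using the deployment_name, openai_api_base, openai_api_type, and openai_api_version fields listed on this page; the resource URL, deployment name, API key, and API version are placeholders for your own Azure OpenAI resource.
import os
from langchain.llms import AzureOpenAI

os.environ["OPENAI_API_KEY"] = "<azure-api-key>"  # placeholder key

llm = AzureOpenAI(
    deployment_name="my-davinci-deployment",  # placeholder deployment
    model_name="text-davinci-003",
    openai_api_base="https://my-resource.openai.azure.com/",  # placeholder resource URL
    openai_api_type="azure",
    openai_api_version="2023-05-15",  # example API version
)
print(llm("Tell me a joke."))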
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-2
Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'text-davinci-003' (alias 'model')¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_api_type: str = ''¶ param openai_api_version: str = ''¶ param openai_organization: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param presence_penalty: float = 0¶ Penalizes repeated tokens. param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶ Timeout for requests to OpenAI completion API. Default is 600 seconds. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-3
when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-4
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-5
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields¶ Build extra kwargs from additional params that were passed in. create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-6
Create the LLMResult from the choices and prompts. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-7
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶ Get the sub prompts for llm call. get_token_ids(text: str) → List[int]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ max_tokens_for_prompt(prompt: str) → int¶ Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") static modelname_to_contextsize(modelname: str) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-8
static modelname_to_contextsize(modelname: str) → int¶ Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
5e63ffe26864-9
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_azure_settings  »  all fields[source]¶ validator validate_environment  »  all fields¶ Validate that the api key and python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property max_context_size: int¶ Get max context size for this model. model Config¶ Bases: object Configuration for this pydantic object. allow_population_by_field_name = True¶ Examples using AzureOpenAI¶ Azure OpenAI OpenAI
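The context-window helpers on this page can be combined into a small budgeting sketch; the deployment name, model name, and key are the same placeholders as before, and the quoted context size is only an example of what modelname_to_contextsize returns for this model family.
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(
    deployment_name="my-davinci-deployment",  # placeholder deployment
    model_name="text-davinci-003",
    openai_api_key="<azure-api-key>",  # placeholder key
)

prompt = "Summarize the plot of Hamlet in one paragraph."
context_size = AzureOpenAI.modelname_to_contextsize("text-davinci-003")  # e.g. 4097 tokens
budget = llm.max_tokens_for_prompt(prompt)  # context size minus tokens used by the prompt
print(context_size, budget, llm.max_context_size)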
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.AzureOpenAI.html
7fcf0d773157-0
langchain.llms.ai21.AI21PenaltyData¶ class langchain.llms.ai21.AI21PenaltyData(*, scale: int = 0, applyToWhitespaces: bool = True, applyToPunctuations: bool = True, applyToNumbers: bool = True, applyToStopwords: bool = True, applyToEmojis: bool = True)[source]¶ Bases: BaseModel Parameters for AI21 penalty data. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param applyToEmojis: bool = True¶ param applyToNumbers: bool = True¶ param applyToPunctuations: bool = True¶ param applyToStopwords: bool = True¶ param applyToWhitespaces: bool = True¶ param scale: int = 0¶
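A minimal sketch of how this model is typically used: it is constructed and handed to the companion AI21 LLM through its penalty fields. The field names countPenalty and frequencyPenalty and the API-key handling are assumptions about that companion class, and the key itself is a placeholder.
from langchain.llms import AI21
from langchain.llms.ai21 import AI21PenaltyData

# Penalize repeated tokens but leave emojis unaffected (scale=0 disables a penalty).
count_penalty = AI21PenaltyData(scale=1, applyToEmojis=False)

llm = AI21(
    ai21_api_key="<ai21-api-key>",  # placeholder
    countPenalty=count_penalty,
    frequencyPenalty=AI21PenaltyData(scale=2),
)
print(llm("Write a tagline for a robotics startup."))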
https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html
142fba6162c1-0
langchain.llms.cohere.Cohere¶ class langchain.llms.cohere.Cohere(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, async_client: Any = None, model: Optional[str] = None, max_tokens: int = 256, temperature: float = 0.75, k: int = 0, p: int = 1, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, truncate: Optional[str] = None, max_retries: int = 10, cohere_api_key: Optional[str] = None, stop: Optional[List[str]] = None)[source]¶ Bases: LLM Cohere large language models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.llms import Cohere cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param cohere_api_key: Optional[str] = None¶ param frequency_penalty: float = 0.0¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-1
param frequency_penalty: float = 0.0¶ Penalizes repeated tokens according to frequency. Between 0 and 1. param k: int = 0¶ Number of most likely tokens to consider at each step. param max_retries: int = 10¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶ Denotes the number of tokens to predict per generation. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: Optional[str] = None¶ Model name to use. param p: int = 1¶ Total probability mass of tokens to consider at each step. param presence_penalty: float = 0.0¶ Penalizes repeated tokens. Between 0 and 1. param stop: Optional[List[str]] = None¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.75¶ A non-negative float that tunes the degree of randomness in generation. param truncate: Optional[str] = None¶ Specify how the client handles inputs longer than the maximum token length: Truncate from START, END or NONE param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
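A minimal sketch combining the parameters above with a batched generate() call, assuming COHERE_API_KEY is set in the environment; the model name reuses the constructor example from this page and may differ for your account.
from langchain.llms import Cohere

llm = Cohere(model="gptd-instruct-tft", temperature=0.3, max_tokens=128)

result = llm.generate(
    ["Suggest a name for a bakery.", "Suggest a name for a gym."],
    stop=["--"],  # cut each completion at the first stop substring
)
for candidates in result.generations:
    print(candidates[0].text)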
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls,
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-5
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
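For illustration, a minimal sketch of the token-counting helpers documented above, assuming the cohere package is installed and COHERE_API_KEY is set; the exact counts depend on the tokenizer the class falls back to.

.. code-block:: python

    from langchain.llms import Cohere

    llm = Cohere()  # assumes COHERE_API_KEY is set in the environment

    text = "Hello, how are you?"
    # Number of tokens (an approximation based on the default tokenizer).
    print(llm.get_num_tokens(text))
    # Ordered token ids for the same text.
    print(llm.get_token_ids(text))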
142fba6162c1-6
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
142fba6162c1-7
Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using Cohere¶ Cohere
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
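A short, hedged sketch of saving and re-loading an LLM configuration as described above; the file path is a placeholder, and load_llm is assumed to be available from langchain.llms.loading.

.. code-block:: python

    from langchain.llms import Cohere
    from langchain.llms.loading import load_llm

    llm = Cohere(temperature=0.3)
    # Persist the constructor configuration (secrets such as API keys are not written out).
    llm.save(file_path="llm.yaml")

    # Rebuild an equivalent LLM from the saved file.
    restored = load_llm("llm.yaml")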
6262b5a4f6e3-0
langchain.llms.pipelineai.PipelineAI¶ class langchain.llms.pipelineai.PipelineAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, pipeline_key: str = '', pipeline_kwargs: Dict[str, Any] = None, pipeline_api_key: Optional[str] = None)[source]¶ Bases: LLM, BaseModel PipelineAI large language models. To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain import PipelineAI pipeline = PipelineAI(pipeline_key="") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param pipeline_api_key: Optional[str] = None¶ param pipeline_key: str = ''¶ The id or tag of the target pipeline param pipeline_kwargs: Dict[str, Any] [Optional]¶ Holds any pipeline parameters valid for create call not explicitly specified. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
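As a rough sketch of the constructor parameters above; the pipeline key shown is a placeholder, and the pipeline_kwargs names depend on the target pipeline.

.. code-block:: python

    from langchain import PipelineAI

    llm = PipelineAI(
        pipeline_key="public/gpt-j:base",      # placeholder id/tag of the target pipeline
        pipeline_kwargs={"max_length": 128},   # extra parameters forwarded to the pipeline
    )
    # __call__ checks the cache and runs the pipeline on the prompt.
    print(llm("Say hello in one short sentence."))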
6262b5a4f6e3-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
6262b5a4f6e3-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
6262b5a4f6e3-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
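A hedged sketch of the batched generate() call documented above; PipelineAI is used as a stand-in, the pipeline key is a placeholder, and the stop sequence is illustrative.

.. code-block:: python

    from langchain import PipelineAI

    llm = PipelineAI(pipeline_key="public/gpt-j:base")  # placeholder pipeline key

    result = llm.generate(
        prompts=["Tell me a joke", "Tell me a limerick"],
        stop=["\n\n"],  # output is cut off at the first stop sequence
    )
    # One list of candidate Generations per input prompt.
    for generations in result.generations:
        print(generations[0].text)
    # Provider-specific metadata, if any.
    print(result.llm_output)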
6262b5a4f6e3-4
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
6262b5a4f6e3-5
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input.
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
6262b5a4f6e3-6
Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
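A sketch of the stream() interface listed above; note that providers without native streaming may yield the whole completion as a single chunk, and the pipeline key is again a placeholder.

.. code-block:: python

    from langchain import PipelineAI

    llm = PipelineAI(pipeline_key="public/gpt-j:base")  # placeholder pipeline key
    for chunk in llm.stream("Write one sentence about the ocean."):
        print(chunk, end="", flush=True)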
6262b5a4f6e3-7
eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶ Examples using PipelineAI¶ PipelineAI
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
1bae36ec707d-0
langchain.llms.azureml_endpoint.DollyContentFormatter¶ class langchain.llms.azureml_endpoint.DollyContentFormatter[source]¶ Bases: ContentFormatterBase Content handler for the Dolly-v2-12b model Methods __init__() escape_special_characters(prompt) Escapes any special characters in prompt format_request_payload(prompt, model_kwargs) Formats the request body according to the input schema of the model. format_response_payload(output) Formats the response body according to the output schema of the model. Attributes accepts The MIME type of the response data returned from the endpoint content_type The MIME type of the input data passed to the endpoint static escape_special_characters(prompt: str) → str¶ Escapes any special characters in prompt format_request_payload(prompt: str, model_kwargs: Dict) → bytes[source]¶ Formats the request body according to the input schema of the model. Returns bytes or seekable file like object in the format specified in the content_type request header. format_response_payload(output: bytes) → str[source]¶ Formats the response body according to the output schema of the model. Returns the data type that is received from the response. accepts: Optional[str] = 'application/json'¶ The MIME type of the response data returned from the endpoint content_type: Optional[str] = 'application/json'¶ The MIME type of the input data passed to the endpoint Examples using DollyContentFormatter¶ AzureML Online Endpoint
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.DollyContentFormatter.html
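An illustrative sketch of the formatter on its own; the model_kwargs values are placeholders, and the payload schema is whatever the deployed Dolly endpoint expects.

.. code-block:: python

    from langchain.llms.azureml_endpoint import DollyContentFormatter

    formatter = DollyContentFormatter()

    # Build the JSON request body for a prompt; special characters are escaped first.
    payload = formatter.format_request_payload(
        prompt='She said "hi" to the model',
        model_kwargs={"temperature": 0.2, "max_tokens": 64},
    )
    print(payload)  # bytes in the content_type ("application/json") format

    # format_response_payload(output) would extract the generated text from the
    # endpoint's JSON response; it is typically wired up via AzureMLOnlineEndpoint.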
b683eae30eca-0
langchain.llms.promptlayer_openai.PromptLayerOpenAI¶ class langchain.llms.promptlayer_openai.PromptLayerOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]¶ Bases: OpenAI PromptLayer OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-1
package installed, and the environment variable OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer API key, respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional parameters: pl_tags – List of strings to tag the request with. return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAI openai = PromptLayerOpenAI(model_name="text-davinci-003") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param batch_size: int = 20¶ Batch size to use when passing multiple documents to generate. param best_of: int = 1¶ Generates best_of completions server-side and returns the "best". param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param client: Any = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_retries: int = 6¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-2
Maximum number of retries to make when generating. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'text-davinci-003' (alias 'model')¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_organization: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param pl_tags: Optional[List[str]] = None¶ param presence_penalty: float = 0¶ Penalizes repeated tokens. param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶ Timeout for requests to OpenAI completion API. Default is 600 seconds. param return_pl_id: Optional[bool] = False¶ param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-3
them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-4
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-5
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ validator build_extra  »  all fields¶ Build extra kwargs from additional params that were passed in. create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-6
Create the LLMResult from the choices and prompts. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-7
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶ Get the sub prompts for the LLM call. get_token_ids(text: str) → List[int]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ max_tokens_for_prompt(prompt: str) → int¶ Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") static modelname_to_contextsize(modelname: str) → int¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
b683eae30eca-8
static modelname_to_contextsize(modelname: str) → int¶ Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The model name we want to know the context size for. Returns The maximum context size. Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
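To round off the PromptLayerOpenAI section, a hedged sketch combining the PromptLayer-specific parameters documented above (pl_tags, return_pl_id); it assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set, and the generation_info key name is an assumption that may vary by version.

.. code-block:: python

    from langchain.llms import PromptLayerOpenAI

    llm = PromptLayerOpenAI(
        model_name="text-davinci-003",
        temperature=0.7,
        pl_tags=["docs-example"],   # tags shown in the PromptLayer dashboard
        return_pl_id=True,          # attach the PromptLayer request id to each Generation
    )

    result = llm.generate(["Tell me a joke"])
    generation = result.generations[0][0]
    print(generation.text)
    # The request id is expected in generation_info when return_pl_id=True
    # (assumed key name: "pl_request_id").
    print(generation.generation_info.get("pl_request_id"))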