langchain.llms API reference (source: https://python.langchain.com/en/latest/reference/modules/llms.html)
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFaceHub[source]# Wrapper around HuggingFaceHub models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation, text2text-generation and summarization for now. Example from langchain.llms import HuggingFaceHub hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key") Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field repo_id: str = 'gpt2'# Model name to use. field task: Optional[str] = None# Task to call the model with. Should be a task that returns generated_text or summary_text.
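To make the HuggingFaceHub fields above concrete, here is a minimal hedged sketch; the repo_id, the model_kwargs keys, and the prompt are illustrative assumptions rather than values taken from this reference, and HUGGINGFACEHUB_API_TOKEN is assumed to be set in the environment.

from langchain.llms import HuggingFaceHub

# Hypothetical repo; any Hub repo supporting text-generation, text2text-generation
# or summarization should work.
hf = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.5, "max_length": 64},  # forwarded to the hosted model
)
print(hf("Translate to German: How are you?"))  # __call__ runs the LLM on a single prompt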
Task to call the model with. Should be a task that returns generated_text or summary_text. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]#
get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFacePipeline[source]# Wrapper around HuggingFace Pipeline API. To use, you should have the transformers python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id:from langchain.llms import HuggingFacePipeline
Example using from_model_id:from langchain.llms import HuggingFacePipeline hf = HuggingFacePipeline.from_model_id( model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10}, ) Example passing pipeline in directly:from langchain.llms import HuggingFacePipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) Validators raise_deprecation Β» all fields set_verbose Β» verbose field model_id: str = 'gpt2'# Model name to use. field model_kwargs: Optional[dict] = None# Key word arguments passed to the model. field pipeline_kwargs: Optional[dict] = None# Key word arguments passed to the pipeline. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
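Once constructed, the pipeline wrapper behaves like any other LLM in this module; a hedged sketch of calling it (the prompts and generation settings are illustrative):

from langchain.llms import HuggingFacePipeline

# Assumes the transformers package is installed and gpt2 can be downloaded or is cached.
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)
print(hf("Once upon a time"))                              # single prompt via __call__
result = hf.generate(["Once upon a time", "The sky is"])   # batched call returning an LLMResult
print(result.generations[0][0].text)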
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) β†’ langchain.llms.base.LLM[source]# Construct the pipeline object from model_id and task. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text.
Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFaceTextGenInference[source]# HuggingFace text generation inference API. This class is a wrapper around the HuggingFace text generation inference API. It is used to generate text from a given prompt. Attributes: - max_new_tokens: The maximum number of tokens to generate.
Attributes: - max_new_tokens: The maximum number of tokens to generate. - top_k: The number of top-k tokens to consider when generating text. - top_p: The cumulative probability threshold for generating text. - typical_p: The typical probability threshold for generating text. - temperature: The temperature to use when generating text. - repetition_penalty: The repetition penalty to use when generating text. - stop_sequences: A list of stop sequences to use when generating text. - seed: The seed to use when generating text. - inference_server_url: The URL of the inference server to use. - timeout: The timeout value in seconds to use while connecting to inference server. - client: The client object used to communicate with the inference server. Methods: - _call: Generates text based on a given prompt and stop sequences. - _llm_type: Returns the type of LLM. Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
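This excerpt lists the attributes but no constructor example, so the following is a hedged sketch built only from the attribute names above; the server URL, timeout, and sampling values are placeholders, and a text-generation-inference server is assumed to be running already.

from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",  # placeholder URL
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.7,
    repetition_penalty=1.1,
    stop_sequences=["\n\n"],
    timeout=120,
)
print(llm("What is deep learning?"))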
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HumanInputLLM[source]# A LLM wrapper which returns user input as the response. Validators raise_deprecation Β» all fields set_verbose Β» verbose field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
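Because HumanInputLLM simply returns whatever a person types as the model response, it is handy for dry-running chains or agents without calling a paid API. A minimal sketch, assuming the default constructor needs no arguments:

from langchain.llms import HumanInputLLM

llm = HumanInputLLM()        # no credentials needed; responses are collected interactively
# The prompt is shown on screen and stdin is read for the "model" answer.
answer = llm("What is 2 + 2?")
print(answer)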
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.LlamaCpp[source]# Wrapper around the llama.cpp model. To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: abetlen/llama-cpp-python Example from langchain.llms import LlamaCpp llm = LlamaCpp(model_path="/path/to/llama/model") Validators raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field echo: Optional[bool] = False# Whether to echo the prompt. field f16_kv: bool = True# Use half-precision for key/value cache. field last_n_tokens_size: Optional[int] = 64# The number of tokens to look back when applying the repeat_penalty. field logits_all: bool = False# Return logits for all tokens, not just the last token. field logprobs: Optional[int] = None#
field logprobs: Optional[int] = None# The number of logprobs to return. If None, no logprobs are returned. field lora_base: Optional[str] = None# The path to the Llama LoRA base model. field lora_path: Optional[str] = None# The path to the Llama LoRA. If None, no LoRa is loaded. field max_tokens: Optional[int] = 256# The maximum number of tokens to generate. field model_path: str [Required]# The path to the Llama model file. field n_batch: Optional[int] = 8# Number of tokens to process in parallel. Should be a number between 1 and n_ctx. field n_ctx: int = 512# Token context window. field n_gpu_layers: Optional[int] = None# Number of layers to be loaded into gpu memory. Default None. field n_parts: int = -1# Number of parts to split the model into. If -1, the number of parts is automatically determined. field n_threads: Optional[int] = None# Number of threads to use. If None, the number of threads is automatically determined. field repeat_penalty: Optional[float] = 1.1# The penalty to apply to repeated tokens. field seed: int = -1# Seed. If -1, a random seed is used. field stop: Optional[List[str]] = []# A list of strings to stop generation when encountered. field streaming: bool = True# Whether to stream the results, token by token. field suffix: Optional[str] = None# A suffix to append to the generated text. If None, no suffix is appended. field temperature: Optional[float] = 0.8# The temperature to use for sampling.
field temperature: Optional[float] = 0.8# The temperature to use for sampling. field top_k: Optional[int] = 40# The top-k value to use for sampling. field top_p: Optional[float] = 0.95# The top-p value to use for sampling. field use_mlock: bool = False# Force system to keep model in RAM. field use_mmap: Optional[bool] = True# Whether to keep the model loaded in RAM field verbose: bool [Optional]# Whether to print out response text. field vocab_only: bool = False# Only load the vocabulary, no weights. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text.
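Combining a few of the fields above, a hedged construction sketch; the model path is a placeholder (model_path is the only required field) and the sampling values simply restate the documented defaults.

from langchain.llms import LlamaCpp

# Assumes llama-cpp-python is installed and a compatible model file exists locally.
llm = LlamaCpp(
    model_path="/path/to/llama/model.bin",  # placeholder path
    n_ctx=512,
    max_tokens=256,
    temperature=0.8,
    top_p=0.95,
    repeat_penalty=1.1,
    streaming=False,
)
print(llm("Q: Name the planets in the solar system. A:"))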
Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]# Yields result objects as they are generated in real time. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. It also calls the callback manager's on_llm_new_token event with similar parameters to the OpenAI LLM class method of the same name. Args: prompt: The prompt to pass into the model. stop: Optional list of stop words to use when generating. Returns: A generator representing the stream of tokens being generated. Yields: Dictionary-like objects containing a string token and metadata. See the llama-cpp-python docs and below for more. Example: from langchain.llms import LlamaCpp llm = LlamaCpp( model_path="/path/to/local/model.bin", temperature=0.5 ) for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'"]): result = chunk["choices"][0] print(result["text"], end="", flush=True) classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Modal[source]# Wrapper around Modal large language models. To use, you should have the modal-client python package installed.
To use, you should have the modal-client python package installed. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose field endpoint_url: str = ''# model endpoint to use field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages.
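The Example heading above lost its body during extraction; as a hedged stand-in, the sketch below uses only the two documented fields. The endpoint URL is a placeholder for a web endpoint you have already deployed with Modal, and the model_kwargs keys are illustrative.

from langchain.llms import Modal

# Assumes the modal-client package is installed and the endpoint is live.
llm = Modal(
    endpoint_url="https://your-workspace--your-app.modal.run",   # placeholder URL
    model_kwargs={"max_length": 256, "temperature": 0.7},        # forwarded to the remote model
)
print(llm("Tell me a joke."))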
Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.MosaicML[source]# Wrapper around MosaicML’s LLM inference service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.llms import MosaicML endpoint_url = ( "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict" ) mosaic_llm = MosaicML( endpoint_url=endpoint_url, mosaicml_api_token="my-api-key" ) Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'# Endpoint URL to use. field inject_instruction_format: bool = False# Whether to inject the instruction format into the prompt. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field retry_sleep: float = 1.0# How long to try sleeping for if a rate limit is encountered field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input.
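Continuing the constructor example above, a short hedged sketch of actually calling the endpoint; the prompt is illustrative, and inject_instruction_format is enabled here because the hosted model is an instruct variant.

from langchain.llms import MosaicML

# Assumes MOSAICML_API_TOKEN is set in the environment.
mosaic_llm = MosaicML(
    endpoint_url="https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",
    inject_instruction_format=True,
)
print(mosaic_llm("Write three bullet points about the ocean."))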
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text.
Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.NLPCloud[source]# Wrapper around NLPCloud large language models. To use, you should have the nlpcloud python package installed, and the environment variable NLPCLOUD_API_KEY set with your API key. Example from langchain.llms import NLPCloud nlpcloud = NLPCloud(model="gpt-neox-20b")
nlpcloud = NLPCloud(model="gpt-neox-20b") Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field bad_words: List[str] = []# List of tokens not allowed to be generated. field do_sample: bool = True# Whether to use sampling (True) or greedy decoding. field early_stopping: bool = False# Whether to stop beam search at num_beams sentences. field length_no_input: bool = True# Whether min_length and max_length should include the length of the input. field length_penalty: float = 1.0# Exponential penalty to the length. field max_length: int = 256# The maximum number of tokens to generate in the completion. field min_length: int = 1# The minimum number of tokens to generate in the completion. field model_name: str = 'finetuned-gpt-neox-20b'# Model name to use. field num_beams: int = 1# Number of beams for beam search. field num_return_sequences: int = 1# How many completions to generate for each prompt. field remove_end_sequence: bool = True# Whether or not to remove the end sequence token. field remove_input: bool = True# Remove input text from API response field repetition_penalty: float = 1.0# Penalizes repeated tokens. 1.0 means no penalty. field temperature: float = 0.7# What sampling temperature to use. field top_k: int = 50# The number of highest probability tokens to keep for top-k filtering. field top_p: int = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]#
Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
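Pulling the example and a few of the listed fields together, a hedged sketch; the values simply restate the documented defaults, and NLPCLOUD_API_KEY is assumed to be set in the environment.

from langchain.llms import NLPCloud

nlpcloud = NLPCloud(
    model_name="finetuned-gpt-neox-20b",  # documented default
    temperature=0.7,
    max_length=256,
    min_length=1,
    top_p=1,
)
print(nlpcloud("Explain what a large language model is in one sentence."))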
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]#
get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.OpenAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed
Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAI openai = OpenAI(model_name="text-davinci-003") Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed。 field batch_size: int = 20# Batch size to use when passing multiple documents to generate. field best_of: int = 1# Generates best_of completions server-side and returns the β€œbest”. field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed。 field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency. field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_retries: int = 6# Maximum number of retries to make when generating. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003' (alias 'model')# Model name to use. field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens.
field presence_penalty: float = 0# Penalizes repeated tokens. field request_timeout: Optional[Union[float, Tuple[float, float]]] = None# Timeout for requests to OpenAI completion API. Default is 600 seconds. field streaming: bool = False# Whether to stream the results or not. field temperature: float = 0.7# What sampling temperature to use. field top_p: float = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages.
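To make the batched generate interface concrete for OpenAI, a hedged sketch; the prompts and settings are illustrative, and OPENAI_API_KEY is assumed to be set in the environment.

from langchain.llms import OpenAI

openai = OpenAI(model_name="text-davinci-003", temperature=0.7, max_tokens=256, n=1)

# generate takes a list of prompts and returns an LLMResult with one entry per prompt.
result = openai.generate(["Tell me a joke.", "Tell me a limerick."])
for generations in result.generations:
    print(generations[0].text)
print(result.llm_output)  # provider metadata, e.g. token usage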
Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
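A hedged sketch of batch generation with generate (the prompts and sampling settings are illustrative; assumes OPENAI_API_KEY is set):
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", n=2, best_of=2)

# one call covers several prompts; stop sequences are optional
result = llm.generate(["Tell me a joke.", "Tell me a limerick."], stop=["\n\n"])

# result.generations holds one list of Generation objects per input prompt
for generations in result.generations:
    for generation in generations:
        print(generation.text)

# aggregate token usage for the OpenAI wrapper is reported in llm_output
print(result.llm_output)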
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β†’ List[List[str]]# Get the sub prompts for llm call. get_token_ids(text: str) β†’ List[int]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) β†’ int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns
Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_token_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) β†’ int# Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. prep_streaming_params(stop: Optional[List[str]] = None) β†’ Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompts to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token
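A hedged sketch combining the context-size helpers described above to budget a completion (the prompt is illustrative; exact numbers depend on the model and on tiktoken being installed for token counting):
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
prompt = "Write a haiku about the sea."

context_size = llm.modelname_to_contextsize("text-davinci-003")  # total context window for the model
prompt_tokens = llm.get_num_tokens(prompt)                       # tokens consumed by the prompt
remaining = llm.max_tokens_for_prompt(prompt)                    # roughly context_size - prompt_tokens

print(context_size, prompt_tokens, remaining)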
classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.OpenAIChat[source]# Wrapper around OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAIChat openaichat = OpenAIChat(model_name="gpt-3.5-turbo") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed. field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed. field max_retries: int = 6# Maximum number of retries to make when generating. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-3.5-turbo'# Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not. field verbose: bool [Optional]# Whether to print out response text.
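A hedged usage sketch for OpenAIChat showing prefix_messages (the system prompt and question are illustrative, and the dict message format is assumed to follow the OpenAI chat schema; assumes OPENAI_API_KEY is set):
from langchain.llms import OpenAIChat

# prefix_messages are prepended to every request before the prompt itself
llm = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[{"role": "system", "content": "You are a terse assistant."}],
)
print(llm("What is LangChain?"))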
field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int][source]#
get_token_ids(text: str) β†’ List[int][source]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.OpenLM[source]# Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed。
Set of special tokens that are allowed。 field batch_size: int = 20# Batch size to use when passing multiple documents to generate. field best_of: int = 1# Generates best_of completions server-side and returns the β€œbest”. field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed。 field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency. field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_retries: int = 6# Maximum number of retries to make when generating. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003' (alias 'model')# Model name to use. field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field request_timeout: Optional[Union[float, Tuple[float, float]]] = None# Timeout for requests to OpenAI completion API. Default is 600 seconds. field streaming: bool = False# Whether to stream the results or not. field temperature: float = 0.7# What sampling temperature to use. field top_p: float = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text.
field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text.
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β†’ List[List[str]]# Get the sub prompts for llm call. get_token_ids(text: str) β†’ List[int]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) β†’ int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_token_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) β†’ int# Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003")
Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. prep_streaming_params(stop: Optional[List[str]] = None) β†’ Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompts to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Petals[source]# Wrapper around Petals Bloom models. To use, you should have the petals python package installed, and the environment variable HUGGINGFACE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed
Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field client: Any = None# The client to use for the API calls. field do_sample: bool = True# Whether or not to use sampling; use greedy decoding otherwise. field max_length: Optional[int] = None# The maximum length of the sequence to be generated. field max_new_tokens: int = 256# The maximum number of new tokens to generate in the completion. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'bigscience/bloom-petals'# The model to use. field temperature: float = 0.7# What sampling temperature to use field tokenizer: Any = None# The tokenizer to use for the API calls. field top_k: Optional[int] = None# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 0.9# The cumulative probability for top-p sampling. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input.
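A hedged usage sketch for the Petals wrapper built only from the fields documented above (assumes the petals package is installed and HUGGINGFACE_API_KEY is set; the prompt is illustrative):
from langchain.llms import Petals

llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=128,
    temperature=0.7,
)
print(llm("Once upon a time, "))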
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text.
Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PipelineAI[source]# Wrapper around PipelineAI large language models. To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators
in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field pipeline_key: str = ''# The id or tag of the target pipeline field pipeline_kwargs: Dict[str, Any] [Optional]# Holds any pipeline parameters valid for create call not explicitly specified. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text.
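A hedged usage sketch for PipelineAI using only the documented fields (assumes the pipeline-ai package is installed and PIPELINE_API_KEY is set; the pipeline key and kwargs below are placeholders, not real values):
from langchain.llms import PipelineAI

llm = PipelineAI(
    pipeline_key="YOUR_PIPELINE_KEY",       # id or tag of the target pipeline (placeholder)
    pipeline_kwargs={"max_length": 100},    # extra parameters accepted by that pipeline (illustrative)
)
print(llm("Tell me a joke."))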
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PredictionGuard[source]# Wrapper around Prediction Guard large language models.
Wrapper around Prediction Guard large language models. To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass it as a named parameter to the constructor. .. rubric:: Example Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field max_tokens: int = 256# Denotes the number of tokens to predict per generation. field name: Optional[str] = 'default-text-gen'# Proxy name to use. field temperature: float = 0.75# A non-negative float that tunes the degree of randomness in generation. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text.
Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
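A hedged usage sketch for PredictionGuard built only from the fields documented above (assumes the predictionguard package is installed and PREDICTIONGUARD_TOKEN is set; the prompt is illustrative):
from langchain.llms import PredictionGuard

llm = PredictionGuard(
    name="default-text-gen",   # proxy name
    max_tokens=256,
    temperature=0.75,
)
print(llm("Tell me a joke."))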
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PromptLayerOpenAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your openAI API key and promptlayer key respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional :param pl_tags: List of strings to tag the request with. :param return_pl_id: If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAI openai = PromptLayerOpenAI(model_name="text-davinci-003") Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult#
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
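A hedged sketch of the two PromptLayer-specific parameters, pl_tags and return_pl_id, described above (assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set; the tag and prompt are illustrative):
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["docs-example"],
    return_pl_id=True,
)
result = llm.generate(["Tell me a joke."])
generation = result.generations[0][0]
# with return_pl_id=True the PromptLayer request id is attached to generation_info
print(generation.text, generation.generation_info)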
deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β†’ List[List[str]]# Get the sub prompts for llm call. get_token_ids(text: str) β†’ List[int]# Get the token IDs using the tiktoken package.
Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) β†’ int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_token_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) β†’ int# Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. prep_streaming_params(stop: Optional[List[str]] = None) β†’ Dict[str, Any]# Prepare the params for streaming.
Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompts to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PromptLayerOpenAIChat[source]# Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your openAI API key and promptlayer key respectively. All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional :param pl_tags: List of strings to tag the request with. :param return_pl_id: If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed. field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed. field max_retries: int = 6# Maximum number of retries to make when generating. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-3.5-turbo'# Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not.
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.RWKV[source]# Wrapper around RWKV language models. To use, you should have the rwkv python package installed, the pre-trained model file, and the model’s config information. Example from langchain.llms import RWKV model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32") # Simplest invocation response = model("Once upon a time, ") Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field CHUNK_LEN: int = 256# Batch size for prompt processing. field max_tokens_per_generation: int = 256# Maximum number of tokens to generate. field model: str [Required]# Path to the pre-trained RWKV model file. field penalty_alpha_frequency: float = 0.4# Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same
in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.. field penalty_alpha_presence: float = 0.4# Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.. field rwkv_verbose: bool = True# Print debug information. field strategy: str = 'cpu fp32'# Token context window. field temperature: float = 1.0# The temperature to use for sampling. field tokens_path: str [Required]# Path to the RWKV tokens file. field top_p: float = 0.5# The top-p value to use for sampling. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str#
Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
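A hedged sketch that extends the RWKV example above with the required tokens_path field (both file paths are illustrative; assumes the rwkv package is installed and the model and tokenizer files exist locally):
from langchain.llms import RWKV

llm = RWKV(
    model="./models/rwkv-3b-fp16.bin",          # required: path to the pre-trained model file
    tokens_path="./models/20B_tokenizer.json",  # required: path to the tokens file
    strategy="cpu fp32",
    max_tokens_per_generation=128,
)
print(llm("Once upon a time, "))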
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Replicate[source]# Wrapper around Replicate models. To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, …} Example Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SagemakerEndpoint[source]# Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]# The content handler class that provides an input and output transform functions to handle formats between LLM
and the endpoint. field credentials_profile_name: Optional[str] = None# The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html field endpoint_kwargs: Optional[Dict] = None# Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html field endpoint_name: str = ''# The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. field model_kwargs: Optional[Dict] = None# Keyword arguments to pass to the model. field region_name: str = ''# The AWS region where the Sagemaker model is deployed, e.g. us-west-2. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input.
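A sketch of wiring up a content handler and endpoint, assuming a text-generation model is already deployed; the endpoint name and the JSON request/response shapes are hypothetical and depend on the model behind the endpoint:

.. code-block:: python

    import json
    from typing import Dict

    from langchain.llms import SagemakerEndpoint
    from langchain.llms.sagemaker_endpoint import LLMContentHandler


    class ContentHandler(LLMContentHandler):
        """Maps prompts to the JSON format a hypothetical HF-style endpoint expects."""

        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
            return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

        def transform_output(self, output) -> str:
            # The response shape depends on the deployed model; this assumes a
            # list of {"generated_text": ...} records.
            response = json.loads(output.read().decode("utf-8"))
            return response[0]["generated_text"]


    llm = SagemakerEndpoint(
        endpoint_name="my-llm-endpoint",        # placeholder endpoint name
        region_name="us-west-2",
        credentials_profile_name="default",     # optional, see field docs above
        model_kwargs={"temperature": 0.8, "max_new_tokens": 256},
        content_handler=ContentHandler(),
    )
    print(llm("Summarize the benefits of unit testing."))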
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]# Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id:from langchain.llms import SelfHostedHuggingFaceLLM import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-large", task="text2text-generation", hardware=gpu )
hardware=gpu ) Example passing fn that generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def get_pipeline(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Validators raise_deprecation Β» all fields set_verbose Β» verbose field device: int = 0# Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Key word arguments to pass to the model load function. field model_id: str = 'gpt2'# Hugging Face model_id to load the model. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field model_load_fn: Callable = <function _load_transformer># Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'transformers', 'torch']# Requirements to install on hardware to inference the model. field task: str = 'text-generation'# Hugging Face task (β€œtext-generation”, β€œtext2text-generation” or
Hugging Face task (β€œtext-generation”, β€œtext2text-generation” or β€œsummarization”). field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β†’ langchain.llms.base.LLM# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedPipeline[source]# Run model inference on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def load_pipeline(): tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") return pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) def inference_fn(pipeline, prompt, stop = None): return pipeline(prompt)[0]["generated_text"] gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") llm = SelfHostedPipeline( model_load_fn=load_pipeline, hardware=gpu, model_reqs=model_reqs, inference_fn=inference_fn ) Example for <2GB model (can be serialized and sent directly to the server):from langchain.llms import SelfHostedPipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") my_model = ... llm = SelfHostedPipeline.from_pipeline( pipeline=my_model, hardware=gpu, model_reqs=["./", "torch", "transformers"],
hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing model path for larger models:from langchain.llms import SelfHostedPipeline import runhouse as rh import pickle from transformers import pipeline generator = pipeline(model="gpt2") rh.blob(pickle.dumps(generator), path="models/pipeline.pkl" ).save().to(gpu, path="models") llm = SelfHostedPipeline.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Validators raise_deprecation Β» all fields set_verbose Β» verbose field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Key word arguments to pass to the model load function. field model_load_fn: Callable [Required]# Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'torch']# Requirements to install on hardware to inference the model. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult#
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β†’ langchain.llms.base.LLM[source]# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text.
Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.StochasticAI[source]# Wrapper around StochasticAI large language models. To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key. Example from langchain.llms import StochasticAI stochasticai = StochasticAI(api_url="") Validators build_extra Β» all fields
stochasticai = StochasticAI(api_url="") Validators build_extra Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field api_url: str = ''# Model name to use. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text.
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.VertexAI[source]# Wrapper around Google Vertex AI large language models. Validators raise_deprecation Β» all fields
set_verbose » verbose validate_environment » all fields field credentials: Any = None# The default custom credentials (google.auth.credentials.Credentials) to use. field location: str = 'us-central1'# The default location to use when making API calls. field max_output_tokens: int = 128# Token limit determines the maximum amount of text output from one prompt. field project: Optional[str] = None# The default GCP project to use when making Vertex API calls. field temperature: float = 0.0# Sampling temperature; it controls the degree of randomness in token selection. field top_k: int = 40# How the model selects tokens for output: the next token is selected from among the top_k most probable tokens. field top_p: float = 0.95# Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value. field tuned_model_name: Optional[str] = None# The name of a tuned model. If provided, model_name is ignored. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input.
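A minimal usage sketch, assuming Google application-default credentials are configured and the Vertex AI API is enabled; the project id is a placeholder:

.. code-block:: python

    from langchain.llms import VertexAI

    # Assumes application-default credentials are configured and the Vertex AI
    # API is enabled; the project id is a placeholder.
    llm = VertexAI(
        project="my-gcp-project",
        location="us-central1",
        temperature=0.0,
        max_output_tokens=128,
        top_k=40,
        top_p=0.95,
    )
    print(llm("List three practical uses of text embeddings."))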
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict#
Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Writer[source]# Wrapper around Writer large language models. To use, you should have the environment variable WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively. Example from langchain import Writer writer = Writer(model_id="palmyra-base") Validators raise_deprecation Β» all fields set_verbose Β» verbose validate_environment Β» all fields field base_url: Optional[str] = None# Base url to use, if None decides based on model name. field best_of: Optional[int] = None# Generates this many completions server-side and returns the β€œbest”. field logprobs: bool = False# Whether to return log probabilities. field max_tokens: Optional[int] = None# Maximum number of tokens to generate. field min_tokens: Optional[int] = None# Minimum number of tokens to generate. field model_id: str = 'palmyra-instruct'# Model name to use.
field model_id: str = 'palmyra-instruct'# Model name to use. field n: Optional[int] = None# How many completions to generate. field presence_penalty: Optional[float] = None# Penalizes repeated tokens regardless of frequency. field repetition_penalty: Optional[float] = None# Penalizes repeated tokens according to frequency. field stop: Optional[List[str]] = None# Sequences when completion generation will stop. field temperature: Optional[float] = None# What sampling temperature to use. field top_p: Optional[float] = None# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. field writer_api_key: Optional[str] = None# Writer API key. field writer_org_id: Optional[str] = None# Writer organization ID. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
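A minimal usage sketch, assuming WRITER_API_KEY and WRITER_ORG_ID are exported; the generation parameters are illustrative:

.. code-block:: python

    from langchain.llms import Writer

    # Assumes WRITER_API_KEY and WRITER_ORG_ID are exported; parameters are illustrative.
    llm = Writer(model_id="palmyra-instruct", temperature=0.7, max_tokens=256)
    print(llm("Write a one-line product tagline for a note-taking app."))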
Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_token_ids(text: str) β†’ List[int]# Get the token present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None) β†’ str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β†’ langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters
file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
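A short sketch of round-tripping an LLM configuration with save(), assuming the companion load_llm helper from langchain.llms.loading and an OPENAI_API_KEY; the file path is illustrative:

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.llms.loading import load_llm

    # save() persists the LLM's configuration (not any model weights) to YAML or
    # JSON; load_llm restores it. Path and parameters are illustrative.
    llm = OpenAI(temperature=0.3)  # assumes OPENAI_API_KEY is set
    llm.save("llm.yaml")

    restored = load_llm("llm.yaml")
    print(restored.temperature)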
Embeddings# Wrappers around embedding modules. pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]# Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/ Example from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding embeddings = AlephAlphaAsymmetricSemanticEmbedding() document = "This is the content of the document" query = "What is the content of the document?" doc_result = embeddings.embed_documents([document]) query_result = embeddings.embed_query(query) field aleph_alpha_api_key: Optional[str] = None# API key for Aleph Alpha API. field compress_to_size: Optional[int] = 128# Should the returned embeddings come back as an original 5120-dim vector, or should it be compressed to 128-dim. field contextual_control_threshold: Optional[int] = None# Attention control parameters only apply to those tokens that have explicitly been set in the request. field control_log_additive: Optional[bool] = True# Apply controls on prompt items by adding the log(control_factor) to attention scores. field hosting: Optional[str] = 'https://api.aleph-alpha.com'# Optional parameter that specifies which datacenters may process the request. field model: Optional[str] = 'luminous-base'# Model name to use. field normalize: Optional[bool] = True# Should returned embeddings be normalized. embed_documents(texts: List[str]) → List[List[float]][source]#
embed_documents(texts: List[str]) β†’ List[List[float]][source]# Call out to Aleph Alpha’s asymmetric Document endpoint. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) β†’ List[float][source]# Call out to Aleph Alpha’s asymmetric, query embedding endpoint :param text: The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]# The symmetric version of the Aleph Alpha’s semantic embeddings. The main difference is that here, both the documents and queries are embedded with a SemanticRepresentation.Symmetric .. rubric:: Example embed_documents(texts: List[str]) β†’ List[List[float]][source]# Call out to Aleph Alpha’s Document endpoint. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) β†’ List[float][source]# Call out to Aleph Alpha’s asymmetric, query embedding endpoint :param text: The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.CohereEmbeddings[source]# Wrapper around Cohere embedding models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import CohereEmbeddings cohere = CohereEmbeddings( model="embed-english-light-v2.0", cohere_api_key="my-api-key" ) field model: str = 'embed-english-v2.0'# Model name to use. field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END") embed_documents(texts: List[str]) → List[List[float]][source]# Call out to Cohere's embedding endpoint. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]# Call out to Cohere's embedding endpoint. Parameters text – The text to embed. Returns Embeddings for the text. class langchain.embeddings.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]# Wrapper around Elasticsearch embedding models. This class provides an interface to generate embeddings using a model deployed in an Elasticsearch cluster. It requires an Elasticsearch connection object and the model_id of the model deployed in the cluster. In Elasticsearch you need to have an embedding model loaded and deployed. - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html embed_documents(texts: List[str]) → List[List[float]][source]# Generate embeddings for a list of documents. Parameters texts (List[str]) – A list of document text strings to generate embeddings for. Returns A list of embeddings, one for each document in the input list. Return type List[List[float]] embed_query(text: str) → List[float][source]# Generate an embedding for a single query text. Parameters text (str) – The query text to generate an embedding for. Returns The embedding for the input query text. Return type List[float]
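A usage sketch for ElasticsearchEmbeddings, with placeholder connection details and assuming a text-embedding model is already deployed in the cluster under the given model_id:

.. code-block:: python

    from elasticsearch import Elasticsearch
    from langchain.embeddings import ElasticsearchEmbeddings

    # Placeholder connection details; assumes a text-embedding model has already
    # been loaded and deployed in the cluster under this model_id.
    es = Elasticsearch("http://localhost:9200", basic_auth=("elastic", "changeme"))

    # The constructor takes the ML namespaced client (es.ml) and the deployed model id.
    embeddings = ElasticsearchEmbeddings(es.ml, model_id="sentence-transformers__all-minilm-l6-v2")

    doc_vectors = embeddings.embed_documents(["first document", "second document"])
    query_vector = embeddings.embed_query("a question about the documents")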