| id | source | text |
|---|---|---|
1ae35bf65e45-49 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-50 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.FakeListLLM[source]#
Fake LLM wrapper for testing purposes.
Validators
raise_deprecation » all f... |
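The row above documents FakeListLLM, a fake LLM wrapper for testing. The same idea can be sketched without langchain installed; the class below is a hypothetical stand-in that replays canned responses in order, not the library implementation.

```python
class CannedLLM:
    """Toy fake LLM for tests: replays a fixed list of responses in order."""

    def __init__(self, responses):
        self.responses = responses
        self.calls = 0

    def __call__(self, prompt: str, stop=None) -> str:
        # The prompt is ignored; responses cycle so tests never run out.
        response = self.responses[self.calls % len(self.responses)]
        self.calls += 1
        return response


llm = CannedLLM(responses=["yes", "no"])
print(llm("Is the sky blue?"))  # "yes"
print(llm("Is it raining?"))    # "no"
```

A chain under test can take such an object anywhere a plain LLM callable is expected, which makes assertions deterministic.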
1ae35bf65e45-51 | https://python.langchain.com/en/latest/reference/modules/llms.html | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... |
1ae35bf65e45-52 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, Map... |
1ae35bf65e45-53 | https://python.langchain.com/en/latest/reference/modules/llms.html | set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base URL to use; if None, it is chosen based on the model name.
field endpoint_ur... |
1ae35bf65e45-54 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-55 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.Prom... |
1ae35bf65e45-56 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
1ae35bf65e45-57 | https://python.langchain.com/en/latest/reference/modules/llms.html | field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Opti... |
1ae35bf65e45-58 | https://python.langchain.com/en/latest/reference/modules/llms.html | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... |
1ae35bf65e45-59 | https://python.langchain.com/en/latest/reference/modules/llms.html | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kw... |
1ae35bf65e45-60 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-61 | https://python.langchain.com/en/latest/reference/modules/llms.html | field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: float = 0.7#
Run inference with this temperature. Must be in the closed interval
[0.0, 1.0].
field top_k: Optional[int] = None#
Decode using t... |
1ae35bf65e45-62 | https://python.langchain.com/en/latest/reference/modules/llms.html | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... |
1ae35bf65e45-63 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-64 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai... |
1ae35bf65e45-65 | https://python.langchain.com/en/latest/reference/modules/llms.html | What sampling temperature to use
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], la... |
1ae35bf65e45-66 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fie... |
1ae35bf65e45-67 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-68 | https://python.langchain.com/en/latest/reference/modules/llms.html | from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environ... |
1ae35bf65e45-69 | https://python.langchain.com/en/latest/reference/modules/llms.html | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ a... |
1ae35bf65e45-70 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-71 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceHub[source]#
Wrapper around HuggingFaceHub models.
To use, you should have the huggi... |
1ae35bf65e45-72 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langcha... |
1ae35bf65e45-73 | https://python.langchain.com/en/latest/reference/modules/llms.html | update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional... |
1ae35bf65e45-74 | https://python.langchain.com/en/latest/reference/modules/llms.html | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... |
1ae35bf65e45-75 | https://python.langchain.com/en/latest/reference/modules/llms.html | hf = HuggingFacePipeline(pipeline=pipe)
Validators
raise_deprecation » all fields
set_verbose » verbose
field model_id: str = 'gpt2'#
Model name to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments passed to the model.
field pipeline_kwargs: Optional[dict] = None#
Keyword arguments passed to the pipel... |
1ae35bf65e45-76 | https://python.langchain.com/en/latest/reference/modules/llms.html | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... |
1ae35bf65e45-77 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-78 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceTextGenInference[source]#
HuggingFace text generation inference API.
This class is a ... |
1ae35bf65e45-79 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langcha... |
1ae35bf65e45-80 | https://python.langchain.com/en/latest/reference/modules/llms.html | update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional... |
1ae35bf65e45-81 | https://python.langchain.com/en/latest/reference/modules/llms.html | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... |
1ae35bf65e45-82 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-83 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.Prom... |
1ae35bf65e45-84 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
1ae35bf65e45-85 | https://python.langchain.com/en/latest/reference/modules/llms.html | field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRA is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
S... |
1ae35bf65e45-86 | https://python.langchain.com/en/latest/reference/modules/llms.html | Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded in RAM
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: O... |
1ae35bf65e45-87 | https://python.langchain.com/en/latest/reference/modules/llms.html | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally... |
1ae35bf65e45-88 | https://python.langchain.com/en/latest/reference/modules/llms.html | Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: ... |
1ae35bf65e45-89 | https://python.langchain.com/en/latest/reference/modules/llms.html | Args: prompt: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns: A generator representing the stream of tokens being generated.
Yields: Dictionary-like objects containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:from langchain.... |
1ae35bf65e45-90 | https://python.langchain.com/en/latest/reference/modules/llms.html | __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = N... |
1ae35bf65e45-91 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fie... |
1ae35bf65e45-92 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-93 | https://python.langchain.com/en/latest/reference/modules/llms.html | "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict"
)
mosaic_llm = MosaicML(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = 'https://models.hosted-on.mosai... |
1ae35bf65e45-94 | https://python.langchain.com/en/latest/reference/modules/llms.html | Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from m... |
1ae35bf65e45-95 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-96 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.NLPCloud[source]#
Wrapper around NLPCloud large language models.
To use, you should have the nlp... |
1ae35bf65e45-97 | https://python.langchain.com/en/latest/reference/modules/llms.html | field remove_input: bool = True#
Remove input text from API response
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
fiel... |
1ae35bf65e45-98 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values... |
1ae35bf65e45-99 | https://python.langchain.com/en/latest/reference/modules/llms.html | get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIn... |
1ae35bf65e45-100 | https://python.langchain.com/en/latest/reference/modules/llms.html | To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(mod... |
1ae35bf65e45-101 | https://python.langchain.com/en/latest/reference/modules/llms.html | field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampli... |
1ae35bf65e45-102 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values... |
1ae35bf65e45-103 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-104 | https://python.langchain.com/en/latest/reference/modules/llms.html | max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.mod... |
1ae35bf65e45-105 | https://python.langchain.com/en/latest/reference/modules/llms.html | pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even ... |
1ae35bf65e45-106 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langcha... |
1ae35bf65e45-107 | https://python.langchain.com/en/latest/reference/modules/llms.html | update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional... |
1ae35bf65e45-108 | https://python.langchain.com/en/latest/reference/modules/llms.html | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[... |
1ae35bf65e45-109 | https://python.langchain.com/en/latest/reference/modules/llms.html | Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for crea... |
1ae35bf65e45-110 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-111 | https://python.langchain.com/en/latest/reference/modules/llms.html | create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[lang... |
1ae35bf65e45-112 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-113 | https://python.langchain.com/en/latest/reference/modules/llms.html | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure ou... |
1ae35bf65e45-114 | https://python.langchain.com/en/latest/reference/modules/llms.html | The maximum number of new tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call
not explicitly specified.
field model_name: str = 'bigscience/bloom-petals'#
The model to use.
field temperature: float = 0.7#
What sampling temperature to use
... |
1ae35bf65e45-115 | https://python.langchain.com/en/latest/reference/modules/llms.html | async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from t... |
1ae35bf65e45-116 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-117 | https://python.langchain.com/en/latest/reference/modules/llms.html | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PipelineAI[source]#
Wrapper around PipelineAI large language models.
To use, you should have the... |
1ae35bf65e45-118 | https://python.langchain.com/en/latest/reference/modules/llms.html | async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult... |
1ae35bf65e45-119 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.Prom... |
1ae35bf65e45-120 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
1ae35bf65e45-121 | https://python.langchain.com/en/latest/reference/modules/llms.html | Your Prediction Guard access token.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run ... |
1ae35bf65e45-122 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fie... |
1ae35bf65e45-123 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-124 | https://python.langchain.com/en/latest/reference/modules/llms.html | be passed here. The PromptLayerOpenAI LLM adds two optional
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOp... |
1ae35bf65e45-125 | https://python.langchain.com/en/latest/reference/modules/llms.html | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values... |
1ae35bf65e45-126 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_n... |
1ae35bf65e45-127 | https://python.langchain.com/en/latest/reference/modules/llms.html | max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.mod... |
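The lookup behind `modelname_to_contextsize` and `max_tokens_for_prompt` can be sketched as a table plus a subtraction. The context sizes below are the commonly cited figures for OpenAI models of that era, used here as illustrative assumptions rather than a reference; the real method covers many more models.

```python
# Illustrative sketch: map model names to context windows, then compute the
# room left for a completion after the prompt's tokens are accounted for.
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
}

def modelname_to_contextsize(modelname: str) -> int:
    try:
        return CONTEXT_SIZES[modelname]
    except KeyError:
        raise ValueError(f"Unknown model: {modelname}")

def max_tokens_for_prompt(modelname: str, num_prompt_tokens: int) -> int:
    # completion budget = context window minus tokens already in the prompt
    return modelname_to_contextsize(modelname) - num_prompt_tokens

max_tokens = max_tokens_for_prompt("text-davinci-003", num_prompt_tokens=6)
```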
1ae35bf65e45-128 | https://python.langchain.com/en/latest/reference/modules/llms.html | pydantic model langchain.llms.PromptLayerOpenAIChat[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All pa... |
1ae35bf65e45-129 | https://python.langchain.com/en/latest/reference/modules/llms.html | __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = N... |
1ae35bf65e45-130 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... |
1ae35bf65e45-131 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-132 | https://python.langchain.com/en/latest/reference/modules/llms.html | response = model("Once upon a time, ")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field CHUNK_LEN: int = 256#
Batch size for prompt processing.
field max_tokens_per_generation: int = 256#
Maximum number of tokens to generate.
field model: str [Required]#
Path to th... |
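The `CHUNK_LEN` field above implies that the prompt's token IDs are fed to the model in batches rather than all at once. The following is a sketch of that chunking step only, under the assumption that the feed loop splits on fixed-size boundaries; the token IDs and helper name are hypothetical, not the wrapper's actual internals.

```python
# Sketch of chunked prompt processing: split a token-ID list into batches
# of at most CHUNK_LEN tokens, as suggested by the field's description.
CHUNK_LEN = 256

def chunk_tokens(token_ids, chunk_len=CHUNK_LEN):
    """Split a token-ID list into consecutive batches of size <= chunk_len."""
    return [token_ids[i:i + chunk_len] for i in range(0, len(token_ids), chunk_len)]

chunks = chunk_tokens(list(range(600)))  # 600 tokens -> 256 + 256 + 88
```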
1ae35bf65e45-133 | https://python.langchain.com/en/latest/reference/modules/llms.html | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a li... |
1ae35bf65e45-134 | https://python.langchain.com/en/latest/reference/modules/llms.html | Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(pro... |
1ae35bf65e45-135 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
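The `save()` round trip serializes the LLM's parameters to disk so they can be reloaded later. The real method accepts `.json` or `.yaml` paths; the sketch below uses JSON and a hypothetical parameter dict so it needs only the standard library, and is not langchain's actual serializer.

```python
# Sketch of the save()/load round trip: write the LLM's config to disk,
# read it back, and get the same parameters out.
import json
import tempfile
from pathlib import Path

# illustrative parameter dict; "_type" mirrors how serialized LLMs record
# which class to reconstruct
llm_config = {"_type": "openai", "temperature": 0.7, "max_tokens": 256}

path = Path(tempfile.mkdtemp()) / "llm.json"
path.write_text(json.dumps(llm_config, indent=2))

reloaded = json.loads(path.read_text())
```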
1ae35bf65e45-136 | https://python.langchain.com/en/latest/reference/modules/llms.html | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given pro... |
1ae35bf65e45-137 | https://python.langchain.com/en/latest/reference/modules/llms.html | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kw... |
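The `copy()` semantics described above can be sketched with plain dicts: `exclude` takes precedence over `include`, and `update` values are merged in without validation. This is a conceptual stand-in, not pydantic's implementation; the helper name is hypothetical.

```python
# Conceptual sketch of pydantic's copy(*, include, exclude, update, deep).
import copy as _copy

def model_copy(data, *, include=None, exclude=None, update=None, deep=False):
    fields = set(data) if include is None else set(include)
    fields -= set(exclude or ())          # exclude wins over include
    new = {k: v for k, v in data.items() if k in fields}
    new.update(update or {})              # note: update is NOT validated
    return _copy.deepcopy(new) if deep else new

base = {"model": "gpt-4", "temperature": 0.0, "verbose": False}
clone = model_copy(base, exclude={"verbose"}, update={"temperature": 0.7})
```

Because `update` bypasses validation, the same "trust this data" caveat from the parameter list applies to the real method too.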
1ae35bf65e45-138 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-139 | https://python.langchain.com/en/latest/reference/modules/llms.html | If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Valid... |
1ae35bf65e45-140 | https://python.langchain.com/en/latest/reference/modules/llms.html | __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = N... |
1ae35bf65e45-141 | https://python.langchain.com/en/latest/reference/modules/llms.html | copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fie... |
1ae35bf65e45-142 | https://python.langchain.com/en/latest/reference/modules/llms.html | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... |
1ae35bf65e45-143 | https://python.langchain.com/en/latest/reference/modules/llms.html | To use, you should have the runhouse python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHugg... |
1ae35bf65e45-144 | https://python.langchain.com/en/latest/reference/modules/llms.html | Hugging Face model_id to load the model.
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field model_load_fn: Callable = <function _load_transformer>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'transformers', 'torch']#
Requirements to ins... |
1ae35bf65e45-145 | https://python.langchain.com/en/latest/reference/modules/llms.html | async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from t... |
1ae35bf65e45-146 | https://python.langchain.com/en/latest/reference/modules/llms.html | generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.Prom... |
1ae35bf65e45-147 | https://python.langchain.com/en/latest/reference/modules/llms.html | predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
l... |
1ae35bf65e45-148 | https://python.langchain.com/en/latest/reference/modules/llms.html | model_reqs=model_reqs, inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=m... |