    Tags to add to the run trace.

attribute temperature: float = 0.7
    What sampling temperature to use.

attribute tokenizer: Any = None
    The tokenizer to use for the API calls.

attribute top_k: Optional[int] = None
    The number of highest-probability vocabulary tokens to keep for top-k filtering.

attribute top_p: float = 0.9
    The cumulative probability for top-p sampling.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
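For orientation, a minimal sketch (an editorial addition, not part of the original reference) of how these entry points are typically used; `llm` stands for any instantiated wrapper on this page, and the prompts are placeholders:

.. code-block:: python

    def sync_usage(llm):
        # __call__ takes a single prompt string and returns the completion text.
        text = llm("Tell me a joke.")
        # generate accepts a batch of prompts and returns an LLMResult,
        # whose .generations is a list (one entry per prompt) of Generation lists.
        result = llm.generate(["Tell me a joke.", "Tell me a fact."])
        return text, result.generations[0][0].text

    async def async_usage(llm):
        # agenerate is the async counterpart of generate; run it with
        # e.g. asyncio.run(async_usage(llm)).
        result = await llm.agenerate(["Tell me a joke."])
        return result.generations[0][0].text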
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None
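A brief sketch (an editorial addition) of round-tripping an LLM configuration with save(); the load_llm helper from langchain.llms.loading is the usual counterpart, and the file path is illustrative:

.. code-block:: python

    from langchain.llms.loading import load_llm

    # Persist the LLM's serializable configuration to YAML (or JSON).
    llm.save(file_path="path/llm.yaml")

    # Later, reconstruct an equivalent LLM from the saved file.
    restored = load_llm("path/llm.yaml")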
property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.PipelineAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_key='', pipeline_kwargs=None, pipeline_api_key=None)[source]
    Bases: langchain.llms.base.LLM, pydantic.main.BaseModel

    Wrapper around PipelineAI large language models.

    To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key.

    Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.

    Example:
        from langchain import PipelineAI
        pipeline = PipelineAI(pipeline_key="")

    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), pipeline_key (str), pipeline_kwargs (Dict[str, Any]), pipeline_api_key (Optional[str])
    Return type: None
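Building on the constructor example above, a fuller usage sketch (an editorial addition; the pipeline key/tag and pipeline_kwargs shown are illustrative, not real identifiers):

.. code-block:: python

    import os
    from langchain import PipelineAI

    # The API key can also be passed directly via pipeline_api_key.
    os.environ["PIPELINE_API_KEY"] = "<your-api-key>"

    llm = PipelineAI(
        pipeline_key="public/my-pipeline:v1",  # hypothetical pipeline id or tag
        pipeline_kwargs={"max_length": 100},   # forwarded to the pipeline's create call
    )
    print(llm("Once upon a time, "))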
attribute pipeline_key: str = ''
    The id or tag of the target pipeline.

attribute pipeline_kwargs: Dict[str, Any] [Optional]
    Holds any pipeline parameters valid for the create call not explicitly specified.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None
property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.PredictionGuard(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='MPT-7B-Instruct', output=None, max_tokens=256, temperature=0.75, token=None, stop=None)[source]
    Bases: langchain.llms.base.LLM

    Wrapper around Prediction Guard large language models.

    To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass it as a named parameter to the constructor. To use Prediction Guard's API along with OpenAI models, set the environment variable OPENAI_API_KEY with your OpenAI API key as well.

    Example:
        pgllm = PredictionGuard(model="MPT-7B-Instruct",
                                token="my-access-token",
                                output={"type": "boolean"})
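Extending the constructor example, a short usage sketch (an editorial addition; the prompt is illustrative, the boolean output schema mirrors the example above, and the token value is a placeholder):

.. code-block:: python

    import os
    from langchain.llms import PredictionGuard

    # The token can come from PREDICTIONGUARD_TOKEN or the constructor.
    os.environ["PREDICTIONGUARD_TOKEN"] = "<your-access-token>"

    pgllm = PredictionGuard(
        model="MPT-7B-Instruct",
        output={"type": "boolean"},  # constrain the structure of the output
    )
    # With a boolean output type, the model is steered toward a yes/no answer.
    print(pgllm("Is the sky blue?"))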
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), client (Any), model (Optional[str]), output (Optional[Dict[str, Any]]), max_tokens (int), temperature (float), token (Optional[str]), stop (Optional[List[str]])
    Return type: None

attribute max_tokens: int = 256
    Denotes the number of tokens to predict per generation.

attribute model: Optional[str] = 'MPT-7B-Instruct'
    Model name to use.

attribute output: Optional[Dict[str, Any]] = None
    The output type or structure for controlling the LLM output.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute temperature: float = 0.75
    A non-negative float that tunes the degree of randomness in generation.

attribute token: Optional[str] = None
    Your Prediction Guard access token.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage
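As an aside, a small sketch (an editorial addition) of the token-counting helpers documented above; it works generically with any instantiated wrapper `llm` on this page:

.. code-block:: python

    from langchain.schema import HumanMessage

    def token_budget(llm, text: str) -> None:
        # Count tokens in raw text and in chat-style messages.
        n_text = llm.get_num_tokens(text)
        n_msgs = llm.get_num_tokens_from_messages([HumanMessage(content=text)])
        # Inspect the underlying token ids (first ten shown).
        ids = llm.get_token_ids(text)
        print(n_text, n_msgs, ids[:10])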
save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.PromptLayerOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]
    Bases: langchain.llms.openai.OpenAI

    Wrapper around OpenAI large language models.

    To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key, respectively.

    All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional parameters:

    Parameters:
        pl_tags (Optional[List[str]]) – List of strings to tag the request with.
        return_pl_id (Optional[bool]) – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
        cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), client (Any), model (str), temperature (float), max_tokens (int), top_p (float), frequency_penalty (float), presence_penalty (float), n (int), best_of (int), model_kwargs (Dict[str, Any]), openai_api_key (Optional[str]), openai_api_base (Optional[str]), openai_organization (Optional[str]), openai_proxy (Optional[str]), batch_size (int), request_timeout (Optional[Union[float, Tuple[float, float]]]), logit_bias (Optional[Dict[str, float]]), max_retries (int), streaming (bool),
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]), disallowed_special (Union[Literal['all'], typing.Collection[str]]), tiktoken_model_name (Optional[str])
    Return type: None

    Example:
        from langchain.llms import PromptLayerOpenAI
        openai = PromptLayerOpenAI(model_name="text-davinci-003")

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
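A sketch (an editorial addition) of the two PromptLayer-specific parameters in use; the exact generation_info key name ("pl_request_id" here) is an assumption about this wrapper, and the environment variables must be set as described above:

.. code-block:: python

    from langchain.llms import PromptLayerOpenAI

    # Assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set in the environment.
    llm = PromptLayerOpenAI(pl_tags=["docs-example"], return_pl_id=True)

    result = llm.generate(["Tell me a joke."])
    for generation in result.generations[0]:
        # With return_pl_id=True, the PromptLayer request ID is surfaced
        # in generation_info, as documented above.
        pl_id = (generation.generation_info or {}).get("pl_request_id")
        print(generation.text, pl_id)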
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
create_llm_result(choices, prompts, token_usage)
    Create the LLMResult from the choices and prompts.
    Parameters: choices (Any), prompts (List[str]), token_usage (Dict[str, int])
    Return type: langchain.schema.LLMResult

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int
get_sub_prompts(params, prompts, stop=None)
    Get the sub prompts for llm call.
    Parameters: params (Dict[str, Any]), prompts (List[str]), stop (Optional[List[str]])
    Return type: List[List[str]]

get_token_ids(text)
    Get the token IDs using the tiktoken package.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

max_tokens_for_prompt(prompt)
    Calculate the maximum number of tokens possible to generate for a prompt.
    Parameters: prompt (str) – The prompt to pass into the model.
    Returns: The maximum number of tokens to generate for a prompt.
    Return type: int
    Example:
        max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
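A small sketch (an editorial addition) combining max_tokens_for_prompt with modelname_to_contextsize, documented just below, to budget a completion; the model name is illustrative:

.. code-block:: python

    from langchain.llms import PromptLayerOpenAI

    llm = PromptLayerOpenAI(model_name="text-davinci-003")
    prompt = "Tell me a joke."

    # Total context window for the model, and the room left for the
    # completion once the prompt's tokens are accounted for.
    context_size = llm.modelname_to_contextsize("text-davinci-003")
    remaining = llm.max_tokens_for_prompt(prompt)
    print(f"context={context_size}, available for completion={remaining}")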
static modelname_to_contextsize(modelname)
    Calculate the maximum number of tokens possible to generate for a model.
    Parameters: modelname (str) – The model name we want to know the context size for.
    Returns: The maximum context size.
    Return type: int
    Example:
        max_tokens = openai.modelname_to_contextsize("text-davinci-003")

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

prep_streaming_params(stop=None)
    Prepare the params for streaming.
    Parameters: stop (Optional[List[str]])
    Return type: Dict[str, Any]

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

stream(prompt, stop=None)
    Call OpenAI with streaming flag and return the resulting generator.
    BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
    Parameters:
        prompt (str) – The prompt to pass into the model.
        stop (Optional[List[str]]) – Optional list of stop words to use when generating.
    Returns: A generator representing the stream of tokens from OpenAI.
    Return type: Generator
    Example:
        generator = openai.stream("Tell me a joke.")
        for token in generator:
            print(token)

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

property max_context_size: int
    Get max context size for this model.

class langchain.llms.PromptLayerOpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all', pl_tags=None, return_pl_id=False)[source]
    Bases: langchain.llms.openai.OpenAIChat

    Wrapper around OpenAI large language models.
    To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key, respectively.

    All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional parameters:

    Parameters:
        pl_tags (Optional[List[str]]) – List of strings to tag the request with.
        return_pl_id (Optional[bool]) – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
        cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), client (Any), model_name (str), model_kwargs (Dict[str, Any]), openai_api_key (Optional[str]), openai_api_base (Optional[str]), openai_proxy (Optional[str]), max_retries (int), prefix_messages (List), streaming (bool), allowed_special (Union[Literal['all'], typing.AbstractSet[str]]), disallowed_special (Union[Literal['all'], typing.Collection[str]])
    Return type: None

    Example:
        from langchain.llms import PromptLayerOpenAIChat
        openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")

attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
    Set of special tokens that are allowed.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
    Set of special tokens that are not allowed.

attribute max_retries: int = 6
    Maximum number of retries to make when generating.

attribute model_kwargs: Dict[str, Any] [Optional]
    Holds any model parameters valid for the create call not explicitly specified.

attribute model_name: str = 'gpt-3.5-turbo'
    Model name to use.

attribute prefix_messages: List [Optional]
    Series of messages for Chat input.

attribute streaming: bool = False
    Whether to stream the results or not.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
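A usage sketch (an editorial addition) of the prefix_messages attribute documented above; the raw role/content dict format shown here is an assumption about what the underlying OpenAIChat wrapper expects, and the prompt is illustrative:

.. code-block:: python

    from langchain.llms import PromptLayerOpenAIChat

    # Assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set in the environment.
    # prefix_messages seeds every request with fixed chat context.
    chat_llm = PromptLayerOpenAIChat(
        model_name="gpt-3.5-turbo",
        prefix_messages=[{"role": "system", "content": "Answer in one sentence."}],
        pl_tags=["docs-example"],
    )
    print(chat_llm("What is LangChain?"))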
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model

dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs using the tiktoken package.
    Parameters: text (str)
    Return type: List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None

property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.RWKV(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, tokens_path, strategy='cpu fp32', rwkv_verbose=True, temperature=1.0, top_p=0.5, penalty_alpha_frequency=0.4, penalty_alpha_presence=0.4, CHUNK_LEN=256, max_tokens_per_generation=256, client=None, tokenizer=None, pipeline=None, model_tokens=None, model_state=None)[source]
    Bases: langchain.llms.base.LLM, pydantic.main.BaseModel

    Wrapper around RWKV language models.

    To use, you should have the rwkv python package installed, the pre-trained model file, and the model's config information.

    Example:
        from langchain.llms import RWKV
        model = RWKV(
            model="./models/rwkv-3b-fp16.bin",
            tokens_path="./models/20B_tokenizer.json",  # required; illustrative path
            strategy="cpu fp32",
        )
        # Simplest invocation
        response = model("Once upon a time, ")
    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model (str), tokens_path (str), strategy (str), rwkv_verbose (bool), temperature (float), top_p (float), penalty_alpha_frequency (float), penalty_alpha_presence (float), CHUNK_LEN (int), max_tokens_per_generation (int), client (Any), tokenizer (Any), pipeline (Any), model_tokens (Any), model_state (Any)
    Return type: None

attribute CHUNK_LEN: int = 256
    Batch size for prompt processing.

attribute max_tokens_per_generation: int = 256
    Maximum number of tokens to generate.

attribute model: str [Required]
    Path to the pre-trained RWKV model file.

attribute penalty_alpha_frequency: float = 0.4
    Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

attribute penalty_alpha_presence: float = 0.4
    Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

attribute rwkv_verbose: bool = True
    Print debug information.
attribute strategy: str = 'cpu fp32'
    The strategy string for running the model (device placement and precision), e.g. 'cpu fp32'.

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute temperature: float = 1.0
    The temperature to use for sampling.

attribute tokens_path: str [Required]
    Path to the RWKV tokens file.

attribute top_p: float = 0.5
    The top-p value to use for sampling.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
    Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]), by_alias (bool), skip_defaults (Optional[bool]), exclude_unset (bool), exclude_defaults (bool), exclude_none (bool), encoder (Optional[Callable[[Any], Any]]), models_as_dict (bool), dumps_kwargs (Any)
    Return type: unicode

predict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

predict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

save(file_path)
    Save the LLM.
    Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
    Return type: None
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
    Parameters: localns (Any)
    Return type: None
property lc_attributes: Dict
    Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
    Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
    Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
    Return whether or not the class is serializable.

class langchain.llms.Replicate(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None, replicate_api_token=None)[source]
    Bases: langchain.llms.base.LLM

    Wrapper around Replicate models.

    To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account

    The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}

    Example:
        from langchain.llms import Replicate
        replicate = Replicate(
            model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
            input={"image_dimensions": "512x512"},
        )

    Parameters: cache (Optional[bool]), verbose (bool), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]),
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), tags (Optional[List[str]]), model (str), input (Dict[str, Any]), model_kwargs (Dict[str, Any]), replicate_api_token (Optional[str])
    Return type: None

attribute tags: Optional[List[str]] = None
    Tags to add to the run trace.

attribute verbose: bool [Optional]
    Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
    Check Cache and run the LLM on the given prompt and input.
    Parameters: prompt (str), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult
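A usage sketch (an editorial addition); the model version string is copied from the constructor example above, and for an image model such as stable-diffusion the returned string is a URL to the generated image rather than text:

.. code-block:: python

    import os
    from langchain.llms import Replicate

    # Token from https://replicate.com/account; can also be passed as
    # replicate_api_token to the constructor.
    os.environ["REPLICATE_API_TOKEN"] = "<your-api-token>"

    sd = Replicate(
        model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
        input={"image_dimensions": "512x512"},
    )
    image_url = sd("a watercolor painting of a lighthouse")
    print(image_url)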
async apredict(text, *, stop=None, **kwargs)
    Predict text from text.
    Parameters: text (str), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
    Predict message from messages.
    Parameters: messages (List[langchain.schema.BaseMessage]), stop (Optional[Sequence[str]]), kwargs (Any)
    Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
    Parameters: _fields_set (Optional[SetStr]), values (Any)
    Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
    Duplicate a model, optionally choosing which fields to include, exclude, and change.
    Parameters:
        include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
        exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
        update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep (bool) – set to True to make a deep copy of the model
        self (Model)
    Returns: new model instance
    Return type: Model
dict(**kwargs)
    Return a dictionary of the LLM.
    Parameters: kwargs (Any)
    Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
    Run the LLM on the given prompt and input.
    Parameters: prompts (List[str]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), tags (Optional[List[str]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
    Take in a list of prompt values and return an LLMResult.
    Parameters: prompts (List[langchain.schema.PromptValue]), stop (Optional[List[str]]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), kwargs (Any)
    Return type: langchain.schema.LLMResult

get_num_tokens(text)
    Get the number of tokens present in the text.
    Parameters: text (str)
    Return type: int

get_num_tokens_from_messages(messages)
    Get the number of tokens in the message.
    Parameters: messages (List[langchain.schema.BaseMessage])
    Return type: int

get_token_ids(text)
    Get the token IDs present in the text.
    Parameters: text (str)
    Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.SagemakerEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source] Bases: langchain.llms.base.LLM Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – endpoint_name (str) – region_name (str) – credentials_profile_name (Optional[str]) – content_handler (langchain.llms.sagemaker_endpoint.LLMContentHandler) – model_kwargs (Optional[Dict]) – endpoint_kwargs (Optional[Dict]) – Return type None attribute content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required] The content handler class that provides an input and output transform functions to handle formats between LLM and the endpoint. attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute endpoint_kwargs: Optional[Dict] = None Optional attributes passed to the invoke_endpoint function. See `boto3`_. docs for more info. .. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html> attribute endpoint_name: str = '' The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. attribute model_kwargs: Optional[Dict] = None Key word arguments to pass to the model. attribute region_name: str = '' The aws region where the Sagemaker model is deployed, eg. us-west-2. attribute tags: Optional[List[str]] = None
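The attributes above combine as in the following construction sketch. This is illustrative only: the endpoint name, region, and JSON payload shape are hypothetical and must match how your own model was deployed.

.. code-block:: python

   import json
   from typing import Dict

   from langchain.llms import SagemakerEndpoint
   from langchain.llms.sagemaker_endpoint import LLMContentHandler

   class ContentHandler(LLMContentHandler):
       content_type = "application/json"
       accepts = "application/json"

       def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
           # Payload shape is model-specific; this assumes a HuggingFace-style endpoint.
           return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

       def transform_output(self, output: bytes) -> str:
           # `output` is the raw response body; parse it back into plain text.
           response_json = json.loads(output.read().decode("utf-8"))
           return response_json[0]["generated_text"]

   llm = SagemakerEndpoint(
       endpoint_name="my-sagemaker-endpoint",  # hypothetical endpoint name
       region_name="us-west-2",
       credentials_profile_name="default",
       content_handler=ContentHandler(),
       model_kwargs={"temperature": 0.7, "max_new_tokens": 128},
   )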
attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str
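A minimal usage sketch for the async variants documented here, assuming an already-constructed llm instance such as the SagemakerEndpoint above. Wrappers without a native async implementation raise NotImplementedError on these calls.

.. code-block:: python

   import asyncio

   async def main() -> None:
       # apredict mirrors predict but awaits the underlying call.
       answer = await llm.apredict("What is 2 + 2?", stop=["\n"])
       print(answer)

   asyncio.run(main())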
kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters
Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object.
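A complementary round-trip sketch for the save method documented above. load_llm from langchain.llms.loading is the standard counterpart; note that not every wrapper supports serialization.

.. code-block:: python

   from langchain.llms.loading import load_llm

   llm.save("llm.yaml")             # persist the configuration (format inferred from the suffix)
   restored = load_llm("llm.yaml")  # rebuild an equivalent LLM from the file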
property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.SelfHostedHuggingFaceLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn=<function _load_transformer>, load_fn_kwargs=None, model_reqs=['./', 'transformers', 'torch'], model_id='gpt2', task='text-generation', device=0, model_kwargs=None)[source] Bases: langchain.llms.self_hosted.SelfHostedPipeline Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id:from langchain.llms import SelfHostedHuggingFaceLLM import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-large", task="text2text-generation",
model_id="google/flan-t5-large", task="text2text-generation", hardware=gpu ) Example passing fn that generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def get_pipeline(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – model_id (str) – task (str) – device (int) – model_kwargs (Optional[dict]) – Return type None attribute device: int = 0 Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. attribute hardware: Any = None Remote hardware to send the inference function to.
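Once constructed, either instance is invoked like any other LLM. Continuing the hf object from the examples above (the prompt is illustrative):

.. code-block:: python

   # The call is executed remotely on the cluster bound via hardware=gpu.
   print(hf("What is the capital of France?"))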
attribute hardware: Any = None Remote hardware to send the inference function to. attribute inference_fn: Callable = <function _generate_text> Inference function to send to the remote hardware. attribute load_fn_kwargs: Optional[dict] = None Key word arguments to pass to the model load function. attribute model_id: str = 'gpt2' Hugging Face model_id to load the model. attribute model_kwargs: Optional[dict] = None Key word arguments to pass to the model. attribute model_load_fn: Callable = <function _load_transformer> Function to load the model remotely on the server. attribute model_reqs: List[str] = ['./', 'transformers', 'torch'] Requirements to install on hardware to inference the model. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute task: str = 'text-generation' Hugging Face task (β€œtext-generation”, β€œtext2text-generation” or β€œsummarization”). attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) –
Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False)
Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs) Init the SelfHostedPipeline from a pipeline object or string. Parameters pipeline (Any) – hardware (Any) – model_reqs (Optional[List[str]]) – device (int) – kwargs (Any) – Return type langchain.llms.base.LLM generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult
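A usage sketch for generate, which applies to any LLM in this module: the returned LLMResult holds one list of generations per input prompt (the prompts here are illustrative).

.. code-block:: python

   result = llm.generate(["Tell me a joke.", "Name three colors."], stop=["\n\n"])
   for prompt_generations in result.generations:
       # Each entry is a list of Generation objects for the corresponding prompt.
       print(prompt_generations[0].text)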
kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) –
exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool
property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.SelfHostedPipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'])[source] Bases: langchain.llms.base.LLM Run model inference on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def load_pipeline(): tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") return pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) def inference_fn(pipeline, prompt, stop = None): return pipeline(prompt)[0]["generated_text"] gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") llm = SelfHostedPipeline( model_load_fn=load_pipeline, hardware=gpu, model_reqs=model_reqs, inference_fn=inference_fn ) Example for <2GB model (can be serialized and sent directly to the server):from langchain.llms import SelfHostedPipeline
import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") my_model = ... llm = SelfHostedPipeline.from_pipeline( pipeline=my_model, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing model path for larger models:from langchain.llms import SelfHostedPipeline import runhouse as rh import pickle from transformers import pipeline generator = pipeline(model="gpt2") rh.blob(pickle.dumps(generator), path="models/pipeline.pkl" ).save().to(gpu, path="models") llm = SelfHostedPipeline.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – Return type None attribute hardware: Any = None Remote hardware to send the inference function to. attribute inference_fn: Callable = <function _generate_text> Inference function to send to the remote hardware. attribute load_fn_kwargs: Optional[dict] = None Key word arguments to pass to the model load function.
Key word arguments to pass to the model load function. attribute model_load_fn: Callable [Required] Function to load the model remotely on the server. attribute model_reqs: List[str] = ['./', 'torch'] Requirements to install on hardware to inference the model. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult
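Where a wrapper implements the async path, agenerate lets several batches run concurrently. A sketch, assuming an llm instance with async support:

.. code-block:: python

   import asyncio

   async def main() -> None:
       batches = [["Summarize topic A."], ["Summarize topic B."]]
       # asyncio.gather schedules both agenerate calls concurrently.
       results = await asyncio.gather(*(llm.agenerate(batch) for batch in batches))
       for res in results:
           print(res.generations[0][0].text)

   asyncio.run(main())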
kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs)
Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)[source] Init the SelfHostedPipeline from a pipeline object or string. Parameters pipeline (Any) – hardware (Any) – model_reqs (Optional[List[str]]) – device (int) – kwargs (Any) – Return type langchain.llms.base.LLM generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters
Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.StochasticAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url='', model_kwargs=None, stochasticai_api_key=None)[source] Bases: langchain.llms.base.LLM Wrapper around StochasticAI large language models. To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key. Example from langchain.llms import StochasticAI stochasticai = StochasticAI(api_url="") Parameters cache (Optional[bool]) – verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – api_url (str) – model_kwargs (Dict[str, Any]) – stochasticai_api_key (Optional[str]) – Return type None attribute api_url: str = '' Base URL of the StochasticAI model API to call. attribute model_kwargs: Dict[str, Any] [Optional] Holds any model parameters valid for the create call that are not explicitly specified. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) –
kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters
Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.VertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='text-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5, tuned_model_name=None)[source] Bases: langchain.llms.vertexai._VertexAICommon, langchain.llms.base.LLM Wrapper around Google Vertex AI large language models. Parameters cache (Optional[bool]) – verbose (bool) –
Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (_LanguageModel) – model_name (str) – temperature (float) – max_output_tokens (int) – top_p (float) – top_k (int) – stop (Optional[List[str]]) – project (Optional[str]) – location (str) – credentials (Any) – request_parallelism (int) – tuned_model_name (Optional[str]) – Return type None attribute credentials: Any = None The default custom credentials (google.auth.credentials.Credentials) to use attribute location: str = 'us-central1' The default location to use when making API calls. attribute max_output_tokens: int = 128 Token limit determines the maximum amount of text output from one prompt. attribute model_name: str = 'text-bison' The name of the Vertex AI large language model. attribute project: Optional[str] = None The default GCP project to use when making Vertex API calls. attribute request_parallelism: int = 5 The amount of parallelism allowed for requests issued to VertexAI models. attribute stop: Optional[List[str]] = None Optional list of stop words to use when generating. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: float = 0.0 Sampling temperature, it controls the degree of randomness in token selection.
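A minimal construction sketch for this class; the project ID is hypothetical, and the google-cloud-aiplatform package plus GCP credentials are assumed to be set up:

.. code-block:: python

   from langchain.llms import VertexAI

   llm = VertexAI(
       model_name="text-bison",
       project="my-gcp-project",  # hypothetical GCP project ID
       location="us-central1",
       temperature=0.2,
       max_output_tokens=256,
   )
   print(llm("Explain quantum computing in one sentence."))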
attribute top_k: int = 40 How the model selects tokens for output: the next token is selected from among the top_k most probable tokens. attribute top_p: float = 0.95 Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value. attribute tuned_model_name: Optional[str] = None The name of a tuned model. If provided, model_name is ignored. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs)
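Because these wrappers are pydantic models, copy can derive a variant without mutating the original. Continuing the llm instance from the VertexAI sketch above (note that update values are not re-validated):

.. code-block:: python

   # Derive a more exploratory variant of the model configuration.
   creative = llm.copy(update={"temperature": 0.8, "top_k": 64})
   assert llm.temperature == 0.2  # the original instance is untouched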
Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Writer(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, writer_org_id=None, model_id='palmyra-instruct', min_tokens=None, max_tokens=None, temperature=None, top_p=None, stop=None, presence_penalty=None, repetition_penalty=None, best_of=None, logprobs=False, n=None, writer_api_key=None, base_url=None)[source] Bases: langchain.llms.base.LLM Wrapper around Writer large language models. To use, you should have the environment variable WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively. Example from langchain import Writer writer = Writer(model_id="palmyra-base") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – writer_org_id (Optional[str]) – model_id (str) – min_tokens (Optional[int]) – max_tokens (Optional[int]) – temperature (Optional[float]) –
max_tokens (Optional[int]) – temperature (Optional[float]) – top_p (Optional[float]) – stop (Optional[List[str]]) – presence_penalty (Optional[float]) – repetition_penalty (Optional[float]) – best_of (Optional[int]) – logprobs (bool) – n (Optional[int]) – writer_api_key (Optional[str]) – base_url (Optional[str]) – Return type None attribute base_url: Optional[str] = None Base url to use, if None decides based on model name. attribute best_of: Optional[int] = None Generates this many completions server-side and returns the β€œbest”. attribute logprobs: bool = False Whether to return log probabilities. attribute max_tokens: Optional[int] = None Maximum number of tokens to generate. attribute min_tokens: Optional[int] = None Minimum number of tokens to generate. attribute model_id: str = 'palmyra-instruct' Model name to use. attribute n: Optional[int] = None How many completions to generate. attribute presence_penalty: Optional[float] = None Penalizes repeated tokens regardless of frequency. attribute repetition_penalty: Optional[float] = None Penalizes repeated tokens according to frequency. attribute stop: Optional[List[str]] = None Sequences when completion generation will stop. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: Optional[float] = None What sampling temperature to use. attribute top_p: Optional[float] = None Total probability mass of tokens to consider at each step. attribute verbose: bool [Optional]
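Expanding the short example above into a fuller construction sketch; the key and organization ID are placeholders and can equally come from the WRITER_API_KEY and WRITER_ORG_ID environment variables:

.. code-block:: python

   import os

   from langchain.llms import Writer

   os.environ["WRITER_API_KEY"] = "..."  # placeholder API key
   os.environ["WRITER_ORG_ID"] = "..."   # placeholder organization ID

   llm = Writer(
       model_id="palmyra-instruct",
       temperature=0.7,
       max_tokens=256,
       stop=["\n\n"],
   )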
attribute verbose: bool [Optional] Whether to print out response text. attribute writer_api_key: Optional[str] = None Writer API key. attribute writer_org_id: Optional[str] = None Writer organization ID. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) –
stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object.
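A short sketch for predict_messages, documented above; it applies to any llm instance in this module, which adapts the message list into a single prompt under the hood:

.. code-block:: python

   from langchain.schema import HumanMessage

   reply = llm.predict_messages([HumanMessage(content="Say hello in French.")])
   print(reply.content)  # reply is a BaseMessage (an AIMessage in practice)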
property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.OctoAIEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url=None, model_kwargs=None, octoai_api_token=None)[source] Bases: langchain.llms.base.LLM Wrapper around OctoAI Inference Endpoints. OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints. To use, you should have the octoai python package installed, and the environment variable OCTOAI_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example
from langchain.llms.octoai_endpoint import OctoAIEndpoint
OctoAIEndpoint(
    octoai_api_token="octoai-api-key",
    endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
    model_kwargs={
        "max_new_tokens": 200,
        "temperature": 0.75,
        "top_p": 0.95,
        "repetition_penalty": 1,
        "seed": None,
        "stop": [],
    },
) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – endpoint_url (Optional[str]) – model_kwargs (Optional[dict]) – octoai_api_token (Optional[str]) – Return type None attribute endpoint_url: Optional[str] = None Endpoint URL to use. attribute model_kwargs: Optional[dict] = None Keyword arguments to pass to the model. attribute octoai_api_token: Optional[str] = None OctoAI API token. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages)
Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool Return whether or not the class is serializable.
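Example (a minimal end-to-end sketch for OctoAIEndpoint; the token and endpoint URL are placeholders for your own deployment, and the octoai package must be installed): .. code-block:: python

    from langchain.llms.octoai_endpoint import OctoAIEndpoint

    # Placeholder credentials; substitute the token and URL of your own endpoint.
    llm = OctoAIEndpoint(
        octoai_api_token="octoai-api-key",
        endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
        model_kwargs={"max_new_tokens": 200, "temperature": 0.75},
    )

    # __call__ checks the cache and then runs the endpoint on a single prompt.
    print(llm("Name three use cases for hosted LLM endpoints."))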
Retrievers class langchain.retrievers.AmazonKendraRetriever(index_id, region_name=None, credentials_profile_name=None, top_k=3, attribute_filter=None, client=None)[source] Bases: langchain.schema.BaseRetriever Retriever class to query documents from Amazon Kendra Index. Parameters index_id (str) – Kendra index id region_name (Optional[str]) – The aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config. credentials_profile_name (Optional[str]) – The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. top_k (int) – No of results to return attribute_filter (Optional[Dict]) – Additional filtering of results based on metadata See: https://docs.aws.amazon.com/kendra/latest/APIReference client (Optional[Any]) – boto3 client for Kendra Example retriever = AmazonKendraRetriever( index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03" ) get_relevant_documents(query)[source] Run search on Kendra index and get top k documents Example: .. code-block:: python docs = retriever.get_relevant_documents(β€˜This is my query’) Parameters query (str) – Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns
Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ArxivRetriever(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: langchain.schema.BaseRetriever, langchain.utilities.arxiv.ArxivAPIWrapper It is effectively a wrapper for ArxivAPIWrapper. It wraps load() to get_relevant_documents(). It uses all ArxivAPIWrapper arguments without any change. Parameters arxiv_search (Any) – arxiv_exceptions (Any) – top_k_results (int) – load_max_docs (int) – load_all_available_meta (bool) – doc_content_chars_max (Optional[int]) – ARXIV_MAX_QUERY_LENGTH (int) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
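Example (a short sketch for ArxivRetriever; requires the arxiv package, and the query is illustrative): .. code-block:: python

    from langchain.retrievers import ArxivRetriever

    retriever = ArxivRetriever(top_k_results=2, load_max_docs=2)
    docs = retriever.get_relevant_documents("attention is all you need")
    for doc in docs:
        # Which metadata keys are present depends on load_all_available_meta.
        print(doc.metadata.get("Title"))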
class langchain.retrievers.AzureCognitiveSearchRetriever(*, service_name='', index_name='', api_key='', api_version='2020-06-30', aiosession=None, content_key='content')[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Wrapper around Azure Cognitive Search. Parameters service_name (str) – index_name (str) – api_key (str) – api_version (str) – aiosession (Optional[aiohttp.client.ClientSession]) – content_key (str) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None ClientSession, in case we want to reuse connection for better performance. attribute api_key: str = '' API Key. Both Admin and Query keys work, but for reading data it's recommended to use a Query key. attribute api_version: str = '2020-06-30' API version attribute content_key: str = 'content' Key in a retrieved result to set as the Document page_content. attribute index_name: str = '' Name of Index inside Azure Cognitive Search service attribute service_name: str = '' Name of Azure Cognitive Search service async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
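Example (a minimal sketch; the service name, index name and key are placeholders for your own Azure Cognitive Search resources): .. code-block:: python

    from langchain.retrievers import AzureCognitiveSearchRetriever

    retriever = AzureCognitiveSearchRetriever(
        service_name="my-search-service",  # placeholder
        index_name="my-index",             # placeholder
        api_key="<query-key>",             # a Query key is sufficient for reads
    )
    docs = retriever.get_relevant_documents("network security")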
class langchain.retrievers.ChatGPTPluginRetriever(*, url, bearer_token, top_k=3, filter=None, aiosession=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters url (str) – bearer_token (str) – top_k (int) – filter (Optional[dict]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute bearer_token: str [Required] attribute filter: Optional[dict] = None attribute top_k: int = 3 attribute url: str [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ContextualCompressionRetriever(*, base_compressor, base_retriever)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever that wraps a base retriever and compresses the results. Parameters base_compressor (langchain.retrievers.document_compressors.base.BaseDocumentCompressor) – base_retriever (langchain.schema.BaseRetriever) – Return type None attribute base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required] Compressor for compressing retrieved documents. attribute base_retriever: langchain.schema.BaseRetriever [Required] Base Retriever to use for getting relevant documents. async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns Sequence of relevant documents Return type List[langchain.schema.Document]
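Example (a minimal sketch for ContextualCompressionRetriever; base_retriever is assumed to be an existing retriever, e.g. from a vector store, and the compressor shown is one of the document compressors shipped with langchain): .. code-block:: python

    from langchain.llms import OpenAI
    from langchain.retrievers import ContextualCompressionRetriever
    from langchain.retrievers.document_compressors import LLMChainExtractor

    # `base_retriever` is assumed to be defined elsewhere.
    compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
    retriever = ContextualCompressionRetriever(
        base_compressor=compressor,
        base_retriever=base_retriever,
    )
    # Retrieved documents are compressed to the query-relevant passages.
    docs = retriever.get_relevant_documents("What did the speaker say about safety?")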
class langchain.retrievers.DataberryRetriever(datastore_url, top_k=None, api_key=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Databerry API. Parameters datastore_url (str) – top_k (Optional[int]) – api_key (Optional[str]) – datastore_url: str api_key: Optional[str] top_k: Optional[int] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ElasticSearchBM25Retriever(client, index_name)[source] Bases: langchain.schema.BaseRetriever
Wrapper around Elasticsearch using BM25 as a retrieval method. To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the "Deployments" page. To obtain your Elastic Cloud password for the default "elastic" user:
1. Log in to the Elastic Cloud console at https://cloud.elastic.co
2. Go to "Security" > "Users"
3. Locate the "elastic" user and click "Edit"
4. Click "Reset password"
5. Follow the prompts to reset the password
The format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Parameters client (Any) – index_name (str) – classmethod create(elasticsearch_url, index_name, k1=2.0, b=0.75)[source] Parameters elasticsearch_url (str) – index_name (str) – k1 (float) – b (float) – Return type langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever add_texts(texts, refresh_indices=True)[source] Run more texts through the embeddings and add to the retriever. Parameters texts (Iterable[str]) – Iterable of strings to add to the retriever.
refresh_indices (bool) – bool to refresh ElasticSearch indices Returns List of ids from adding the texts into the retriever. Return type List[str] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
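Example (a minimal sketch; assumes a reachable Elasticsearch deployment at the placeholder URL): .. code-block:: python

    from langchain.retrievers import ElasticSearchBM25Retriever

    retriever = ElasticSearchBM25Retriever.create(
        elasticsearch_url="http://localhost:9200",  # placeholder URL
        index_name="langchain-bm25-index",
    )
    retriever.add_texts(["foo", "foo bar", "hello world"])
    docs = retriever.get_relevant_documents("foo")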
class langchain.retrievers.KNNRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel KNN Retriever. Parameters embeddings (langchain.embeddings.base.Embeddings) – index (Any) – texts (List[str]) – k (int) – relevancy_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute k: int = 4 attribute relevancy_threshold: Optional[float] = None attribute texts: List[str] [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_texts(texts, embeddings, **kwargs)[source] Parameters texts (List[str]) – embeddings (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.retrievers.knn.KNNRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.LlamaIndexGraphRetriever(*, graph=None, query_configs=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Question-answering with sources over a LlamaIndex graph data structure. Parameters graph (Any) – query_configs (List[Dict]) – Return type None attribute graph: Any = None attribute query_configs: List[Dict] [Optional] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – Return type List[langchain.schema.Document] class langchain.retrievers.LlamaIndexRetriever(*, index=None, query_kwargs=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Question-answering with sources over a LlamaIndex data structure. Parameters index (Any) – query_kwargs (Dict) –
Return type None attribute index: Any = None attribute query_kwargs: Dict [Optional] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – Return type List[langchain.schema.Document] class langchain.retrievers.MergerRetriever(retrievers)[source] Bases: langchain.schema.BaseRetriever This class merges the results of multiple retrievers. Parameters retrievers (List[langchain.schema.BaseRetriever]) – A list of retrievers to merge. get_relevant_documents(query)[source] Get the relevant documents for a given query. Parameters query (str) – The query to search for. Returns A list of relevant documents. Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Asynchronously get the relevant documents for a given query. Parameters query (str) – The query to search for. Returns A list of relevant documents. Return type List[langchain.schema.Document] merge_documents(query)[source] Merge the results of the retrievers. Parameters query (str) – The query to search for. Returns A list of merged documents. Return type List[langchain.schema.Document] async amerge_documents(query)[source] Asynchronously merge the results of the retrievers. Parameters query (str) – The query to search for. Returns A list of merged documents. Return type List[langchain.schema.Document]
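Example (a minimal sketch for MergerRetriever; retriever_a and retriever_b are assumed to be existing retrievers): .. code-block:: python

    from langchain.retrievers import MergerRetriever

    # Results from both retrievers are merged into a single list.
    merger = MergerRetriever(retrievers=[retriever_a, retriever_b])
    docs = merger.get_relevant_documents("query")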
class langchain.retrievers.MetalRetriever(client, params=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Metal API. Parameters client (Any) – params (Optional[dict]) – get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.MilvusRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Milvus API. Parameters embedding_function (langchain.embeddings.base.Embeddings) – collection_name (str) – connection_args (Optional[Dict[str, Any]]) – consistency_level (str) – search_params (Optional[dict]) – add_texts(texts, metadatas=None)[source] Add text to the Milvus store. Parameters texts (List[str]) – The text metadatas (List[dict]) – Metadata dicts, must line up with existing store Return type None get_relevant_documents(query)[source] Get documents relevant for a query.
Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.MultiQueryRetriever(retriever, llm_chain, verbose=True, parser_key='lines')[source] Bases: langchain.schema.BaseRetriever Given a user query, use an LLM to write a set of queries. Retrieve docs for each query. Take the unique union of all retrieved docs. Parameters retriever (langchain.schema.BaseRetriever) – llm_chain (langchain.chains.llm.LLMChain) – verbose (bool) – parser_key (str) – Return type None classmethod from_llm(retriever, llm, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='You are an AI language model assistant. Your task is \n to generate 3 different versions of the given user \n question to retrieve relevant documents from a vector database. \n By generating multiple perspectives on the user question, \n your goal is to help the user overcome some of the limitations \n of distance-based similarity search. Provide these alternative \n questions seperated by newlines. Original question: {question}', template_format='f-string', validate_template=True), parser_key='lines')[source] Initialize from llm using default template. Parameters
retriever (langchain.schema.BaseRetriever) – retriever to query documents from llm (langchain.llms.base.BaseLLM) – llm for query generation using DEFAULT_QUERY_PROMPT prompt (langchain.prompts.prompt.PromptTemplate) – parser_key (str) – Returns MultiQueryRetriever Return type langchain.retrievers.multi_query.MultiQueryRetriever get_relevant_documents(question)[source] Get relevant documents given a user query. Parameters question (str) – user query Returns Unique union of relevant documents from all generated queries Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] generate_queries(question)[source] Generate queries based upon user input. Parameters question (str) – user query Returns List of LLM generated queries that are similar to the user input Return type List[str] retrieve_documents(queries)[source] Run all LLM generated queries. Parameters queries (List[str]) – query list Returns List of retrieved Documents Return type List[langchain.schema.Document] unique_union(documents)[source] Get unique Documents. Parameters documents (List[langchain.schema.Document]) – List of retrieved Documents Returns List of unique retrieved Documents Return type List[langchain.schema.Document]
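Example (a minimal sketch for MultiQueryRetriever.from_llm; vectordb is assumed to be an existing vector store): .. code-block:: python

    from langchain.llms import OpenAI
    from langchain.retrievers.multi_query import MultiQueryRetriever

    retriever = MultiQueryRetriever.from_llm(
        retriever=vectordb.as_retriever(),  # `vectordb` assumed defined elsewhere
        llm=OpenAI(temperature=0),
    )
    # The LLM writes several variants of the question; the unique union of
    # the documents retrieved for each variant is returned.
    docs = retriever.get_relevant_documents("What does the paper say about regularization?")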
class langchain.retrievers.PineconeHybridSearchRetriever(*, embeddings, sparse_encoder=None, index=None, top_k=4, alpha=0.5)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters embeddings (langchain.embeddings.base.Embeddings) – sparse_encoder (Any) – index (Any) – top_k (int) – alpha (float) – Return type None attribute alpha: float = 0.5 attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute sparse_encoder: Any = None attribute top_k: int = 4 add_texts(texts, ids=None, metadatas=None)[source] Parameters texts (List[str]) – ids (Optional[List[str]]) – metadatas (Optional[List[dict]]) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
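Example (a minimal sketch; index is assumed to be an existing pinecone.Index and bm25_encoder a fitted sparse encoder, e.g. from the pinecone_text package): .. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers import PineconeHybridSearchRetriever

    retriever = PineconeHybridSearchRetriever(
        embeddings=OpenAIEmbeddings(),
        sparse_encoder=bm25_encoder,  # assumed defined elsewhere
        index=index,                  # assumed defined elsewhere
        top_k=4,
        alpha=0.5,  # 1.0 = dense-only scoring, 0.0 = sparse-only
    )
    docs = retriever.get_relevant_documents("hybrid search example")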
class langchain.retrievers.PubMedRetriever(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: langchain.schema.BaseRetriever, langchain.utilities.pupmed.PubMedAPIWrapper It is effectively a wrapper for PubMedAPIWrapper. It wraps load() to get_relevant_documents(). It uses all PubMedAPIWrapper arguments without any change. Parameters top_k_results (int) – load_max_docs (int) – doc_content_chars_max (int) – load_all_available_meta (bool) – email (str) – base_url_esearch (str) – base_url_efetch (str) – max_retry (int) – sleep_time (float) – ARXIV_MAX_QUERY_LENGTH (int) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
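Example (a minimal sketch; queries are sent to the NCBI E-utilities endpoints, so network access is required): .. code-block:: python

    from langchain.retrievers import PubMedRetriever

    retriever = PubMedRetriever(top_k_results=3)
    docs = retriever.get_relevant_documents("covid vaccine efficacy")
    for doc in docs:
        print(doc.metadata)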
class langchain.retrievers.RemoteLangChainRetriever(*, url, headers=None, input_key='message', response_key='response', page_content_key='page_content', metadata_key='metadata')[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters url (str) – headers (Optional[dict]) – input_key (str) – response_key (str) – page_content_key (str) – metadata_key (str) – Return type None attribute headers: Optional[dict] = None attribute input_key: str = 'message' attribute metadata_key: str = 'metadata' attribute page_content_key: str = 'page_content' attribute response_key: str = 'response' attribute url: str [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.SVMRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel SVM Retriever. Parameters embeddings (langchain.embeddings.base.Embeddings) –
index (Any) – texts (List[str]) – k (int) – relevancy_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute k: int = 4 attribute relevancy_threshold: Optional[float] = None attribute texts: List[str] [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_texts(texts, embeddings, **kwargs)[source] Parameters texts (List[str]) – embeddings (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.retrievers.svm.SVMRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
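Example (a minimal sketch for SVMRetriever.from_texts; requires scikit-learn, and the texts are illustrative): .. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers import SVMRetriever

    retriever = SVMRetriever.from_texts(
        ["foo", "bar", "world"],
        OpenAIEmbeddings(),
    )
    docs = retriever.get_relevant_documents("foo")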
class langchain.retrievers.SelfQueryRetriever(*, vectorstore, llm_chain, search_type='similarity', search_kwargs=None, structured_query_translator, verbose=False, use_original_query=False)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever that wraps around a vector store and uses an LLM to generate the vector store queries. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – llm_chain (langchain.chains.llm.LLMChain) – search_type (str) – search_kwargs (dict) – structured_query_translator (langchain.chains.query_constructor.ir.Visitor) – verbose (bool) – use_original_query (bool) – Return type None attribute llm_chain: langchain.chains.llm.LLMChain [Required] The LLMChain for generating the vector store queries. attribute search_kwargs: dict [Optional] Keyword arguments to pass in to the vector store search. attribute search_type: str = 'similarity' The search type to perform on the vector store. attribute structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required] Translator for turning internal query language into vectorstore search params. attribute use_original_query: bool = False Use original query instead of the revised new query from the LLM. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] The underlying vector store from which documents will be retrieved. attribute verbose: bool = False async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_llm(llm, vectorstore, document_contents, metadata_field_info, structured_query_translator=None, chain_kwargs=None, enable_limit=False, use_original_query=False, **kwargs)[source] Parameters llm (langchain.base_language.BaseLanguageModel) – vectorstore (langchain.vectorstores.base.VectorStore) – document_contents (str) –
metadata_field_info (List[langchain.chains.query_constructor.schema.AttributeInfo]) – structured_query_translator (Optional[langchain.chains.query_constructor.ir.Visitor]) – chain_kwargs (Optional[Dict]) – enable_limit (bool) – use_original_query (bool) – kwargs (Any) – Return type langchain.retrievers.self_query.base.SelfQueryRetriever get_relevant_documents(query, callbacks=None)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Returns List of relevant documents Return type List[langchain.schema.Document]
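Example (a minimal sketch for SelfQueryRetriever.from_llm; vectorstore is assumed to be an existing, self-query-capable vector store, and the metadata schema is illustrative): .. code-block:: python

    from langchain.llms import OpenAI
    from langchain.retrievers.self_query.base import SelfQueryRetriever
    from langchain.chains.query_constructor.schema import AttributeInfo

    metadata_field_info = [
        AttributeInfo(name="year", description="Year the film was released", type="integer"),
    ]
    retriever = SelfQueryRetriever.from_llm(
        llm=OpenAI(temperature=0),
        vectorstore=vectorstore,  # assumed defined elsewhere
        document_contents="Brief summary of a movie",
        metadata_field_info=metadata_field_info,
    )
    # The LLM turns the natural-language question into a structured query
    # with a metadata filter before hitting the vector store.
    docs = retriever.get_relevant_documents("movies about dinosaurs released after 1990")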
class langchain.retrievers.TFIDFRetriever(*, vectorizer=None, docs, tfidf_array=None, k=4)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters vectorizer (Any) – docs (List[langchain.schema.Document]) – tfidf_array (Any) – k (int) – Return type None attribute docs: List[langchain.schema.Document] [Required] attribute k: int = 4 attribute tfidf_array: Any = None attribute vectorizer: Any = None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_documents(documents, *, tfidf_params=None, **kwargs)[source] Parameters documents (Iterable[langchain.schema.Document]) – tfidf_params (Optional[Dict[str, Any]]) – kwargs (Any) – Return type langchain.retrievers.tfidf.TFIDFRetriever classmethod from_texts(texts, metadatas=None, tfidf_params=None, **kwargs)[source] Parameters texts (Iterable[str]) – metadatas (Optional[Iterable[dict]]) – tfidf_params (Optional[Dict[str, Any]]) – kwargs (Any) – Return type langchain.retrievers.tfidf.TFIDFRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
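Example (a minimal sketch for TFIDFRetriever.from_texts; requires scikit-learn): .. code-block:: python

    from langchain.retrievers import TFIDFRetriever

    retriever = TFIDFRetriever.from_texts(["foo", "bar", "world"])
    docs = retriever.get_relevant_documents("foo")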
class langchain.retrievers.TimeWeightedVectorStoreRetriever(*, vectorstore, search_kwargs=None, memory_stream=None, decay_rate=0.01, k=4, other_score_keys=[], default_salience=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever combining embedding similarity with recency. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – search_kwargs (dict) – memory_stream (List[langchain.schema.Document]) – decay_rate (float) – k (int) – other_score_keys (List[str]) – default_salience (Optional[float]) – Return type None attribute decay_rate: float = 0.01 The exponential decay factor used as (1.0-decay_rate)**(hrs_passed). attribute default_salience: Optional[float] = None The salience to assign memories not retrieved from the vector store. None assigns no salience to documents not fetched from the vector store. attribute k: int = 4 The maximum number of documents to retrieve in a given call. attribute memory_stream: List[langchain.schema.Document] [Optional] The memory_stream of documents to search through. attribute other_score_keys: List[str] = [] Other keys in the metadata to factor into the score, e.g. 'importance'. attribute search_kwargs: dict [Optional] Keyword arguments to pass to the vectorstore similarity search. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] The vectorstore to store documents and determine salience. async aadd_documents(documents, **kwargs)[source] Add documents to vectorstore. Parameters documents (List[langchain.schema.Document]) – kwargs (Any) – Return type List[str] add_documents(documents, **kwargs)[source] Add documents to vectorstore. Parameters documents (List[langchain.schema.Document]) – kwargs (Any) – Return type List[str] async aget_relevant_documents(query)[source] Return documents that are relevant to the query. Parameters query (str) – Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Return documents that are relevant to the query. Parameters query (str) – Return type List[langchain.schema.Document] get_salient_docs(query)[source] Return documents that are salient to the query. Parameters query (str) – Return type