Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.NLPCloud[source]# Wrapper around NLPCloud large language models. To use, you should have the nlpcloud python package installed, and the environment variable NLPCLOUD_API_KEY set with your API key. Example
from langchain.llms import NLPCloud nlpcloud = NLPCloud(model="gpt-neox-20b") Validators raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field bad_words: List[str] = []# List of tokens not allowed to be generated. field do_sample: bool = True# Whether to use sampling (True) or greedy decoding. field early_stopping: bool = False# Whether to stop beam search at num_beams sentences. field length_no_input: bool = True# Whether min_length and max_length should include the length of the input. field length_penalty: float = 1.0# Exponential penalty to the length. field max_length: int = 256# The maximum number of tokens to generate in the completion. field min_length: int = 1# The minimum number of tokens to generate in the completion. field model_name: str = 'finetuned-gpt-neox-20b'# Model name to use. field num_beams: int = 1# Number of beams for beam search. field num_return_sequences: int = 1# How many completions to generate for each prompt. field remove_end_sequence: bool = True# Whether or not to remove the end sequence token. field remove_input: bool = True# Remove input text from API response. field repetition_penalty: float = 1.0# Penalizes repeated tokens. 1.0 means no penalty. field tags: Optional[List[str]] = None# Tags to add to the run trace. field temperature: float = 0.7# What sampling temperature to use. field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering. field top_p: int = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages.
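A minimal usage sketch for the call and batch methods above (assumes NLPCLOUD_API_KEY is set; the prompts and parameter values are illustrative):

from langchain.llms import NLPCloud

llm = NLPCloud(model_name="finetuned-gpt-neox-20b", temperature=0.7, max_length=256)

# __call__ runs a single prompt; stop sequences are optional.
text = llm("Write a haiku about the sea.", stop=["\n\n"])

# generate() batches prompts and returns an LLMResult with one
# list of Generations per input prompt.
result = llm.generate(["Tell me a joke.", "Tell me a riddle."])
print(result.generations[0][0].text)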
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int#
Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. get_token_ids(text: str) → List[int]# Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
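The save method above writes the LLM configuration to disk; the file extension selects YAML or JSON. A small sketch (the path is illustrative, and loading back through langchain.llms.loading.load_llm is assumed here):

from langchain.llms import NLPCloud
from langchain.llms.loading import load_llm

llm = NLPCloud(temperature=0.2)
llm.save(file_path="llm.yaml")   # writes the config as YAML

restored = load_llm("llm.yaml")  # rebuilds an equivalent LLM from the file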
property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.OpenAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAI openai = OpenAI(model_name="text-davinci-003") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed. field batch_size: int = 20# Batch size to use when passing multiple documents to generate. field best_of: int = 1# Generates best_of completions server-side and returns the "best". field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed. field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_retries: int = 6# Maximum number of retries to make when generating. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003' (alias 'model')# Model name to use. field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field request_timeout: Optional[Union[float, Tuple[float, float]]] = None# Timeout for requests to OpenAI completion API. Default is 600 seconds. field streaming: bool = False# Whether to stream the results or not. field tags: Optional[List[str]] = None# Tags to add to the run trace. field temperature: float = 0.7# What sampling temperature to use. field top_p: float = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input.
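A short sketch of the calling conventions above (assumes OPENAI_API_KEY is set; the prompts are illustrative):

from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0.7, max_tokens=256)

# __call__ runs a single prompt and returns the completion text.
joke = llm("Tell me a joke.", stop=["\n\n"])

# generate() batches prompts (chunked by batch_size) and returns an
# LLMResult that carries the generations plus provider metadata.
result = llm.generate(["Hello!", "Name three prime numbers."])
print(result.generations[1][0].text)
print(result.llm_output)  # e.g. token usage reported by the API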
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub-prompts for the LLM call. get_token_ids(text: str) → List[int]# Get the token IDs using the tiktoken package. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) → int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) → int# Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003")
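Putting the two context-size helpers together (the numbers in the comments are indicative; they depend on the model):

from langchain.llms import OpenAI

openai = OpenAI(model_name="text-davinci-003")

# Total context window for the model family, e.g. 4097 for text-davinci-003.
context_size = openai.modelname_to_contextsize("text-davinci-003")

# Tokens left for the completion once the prompt is accounted for.
remaining = openai.max_tokens_for_prompt("Tell me a joke.")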
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") stream(prompt: str, stop: Optional[List[str]] = None) → Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompt to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAIChat openaichat = OpenAIChat(model_name="gpt-3.5-turbo") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed. field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed. field max_retries: int = 6# Maximum number of retries to make when generating. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-3.5-turbo'# Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not. field tags: Optional[List[str]] = None# Tags to add to the run trace. field verbose: bool [Optional]# Whether to print out response text.
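A sketch of how prefix_messages can seed the chat context (the message dicts follow the OpenAI chat format; the system prompt is illustrative):

from langchain.llms import OpenAIChat

chat_llm = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        {"role": "system", "content": "You answer in exactly one sentence."}
    ],
)

# The prompt is appended to the conversation after the prefix messages.
answer = chat_llm("What is LangChain?")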
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. get_token_ids(text: str) → List[int][source]# Get the token IDs using the tiktoken package.
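For example (requires the tiktoken package; the counts shown are indicative):

from langchain.llms import OpenAIChat

chat_llm = OpenAIChat(model_name="gpt-3.5-turbo")

token_ids = chat_llm.get_token_ids("Hello world")   # token IDs for the text
n_tokens = chat_llm.get_num_tokens("Hello world")   # e.g. 2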
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.OpenLM[source]# Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}# Set of special tokens that are allowed. field batch_size: int = 20# Batch size to use when passing multiple documents to generate. field best_of: int = 1# Generates best_of completions server-side and returns the "best". field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'# Set of special tokens that are not allowed. field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency. field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_retries: int = 6# Maximum number of retries to make when generating. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003' (alias 'model')# Model name to use. field n: int = 1#
How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field request_timeout: Optional[Union[float, Tuple[float, float]]] = None# Timeout for requests to OpenAI completion API. Default is 600 seconds. field streaming: bool = False# Whether to stream the results or not. field tags: Optional[List[str]] = None# Tags to add to the run trace. field temperature: float = 0.7# What sampling temperature to use. field top_p: float = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
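An async sketch for the coroutine methods above (OpenLM exposes the OpenAI-compatible parameters documented here; the model name is illustrative):

import asyncio

from langchain.llms import OpenLM

llm = OpenLM(model="text-davinci-003")

async def main() -> None:
    # agenerate awaits the whole batch and returns an LLMResult.
    result = await llm.agenerate(["Tell me a joke.", "Tell me a fact."])
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())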
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) → Dict# Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]# Get the sub-prompts for the LLM call. get_token_ids(text: str) → List[int]# Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) → int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) → int# Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") stream(prompt: str, stop: Optional[List[str]] = None) → Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompt to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable.
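A usage sketch for the beta stream interface above (the chunk layout is assumed to follow the raw OpenAI completion API and may change along with this interface):

from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")

for chunk in llm.stream("Tell me a joke."):
    # Each chunk is a raw completion delta from the OpenAI API.
    print(chunk["choices"][0]["text"], end="", flush=True)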
pydantic model langchain.llms.Petals[source]# Wrapper around Petals Bloom models. To use, you should have the petals python package installed, and the environment variable HUGGINGFACE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import Petals petals = Petals() Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field client: Any = None# The client to use for the API calls. field do_sample: bool = True# Whether or not to use sampling; use greedy decoding otherwise. field max_length: Optional[int] = None# The maximum length of the sequence to be generated. field max_new_tokens: int = 256# The maximum number of new tokens to generate in the completion. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'bigscience/bloom-petals'# The model to use. field tags: Optional[List[str]] = None# Tags to add to the run trace. field temperature: float = 0.7# What sampling temperature to use. field tokenizer: Any = None# The tokenizer to use for the API calls. field top_k: Optional[int] = None# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 0.9# The cumulative probability for top-p sampling. field verbose: bool [Optional]# Whether to print out response text.
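A construction sketch using the fields above (assumes HUGGINGFACE_API_KEY is set and the petals package is installed; inference runs over the public swarm, so generation can be slow):

from langchain.llms import Petals

llm = Petals(
    model_name="bigscience/bloom-petals",
    temperature=0.7,
    max_new_tokens=64,
)
print(llm("Explain distributed inference in one sentence."))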
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message. get_token_ids(text: str) → List[int]# Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.PipelineAI[source]# Wrapper around PipelineAI large language models. To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain import PipelineAI pipeline = PipelineAI(pipeline_key="") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field pipeline_key: str = ''# The id or tag of the target pipeline. field pipeline_kwargs: Dict[str, Any] [Optional]# Holds any pipeline parameters valid for create call not explicitly specified. field tags: Optional[List[str]] = None# Tags to add to the run trace. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input.
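A sketch of invoking a deployed pipeline through the wrapper above (the pipeline_key value is hypothetical; use the id or tag of your own deployed pipeline):

from langchain import PipelineAI

llm = PipelineAI(
    pipeline_key="public/gpt-j:base",      # hypothetical pipeline id/tag
    pipeline_kwargs={"temperature": 0.7},  # forwarded to the pipeline run
)
print(llm("Tell me a joke."))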
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. get_token_ids(text: str) → List[int]# Get the token IDs present in the text.
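The copy helper documented above can derive a variant of a configured LLM without revalidating it, e.g.:

from langchain import PipelineAI

base = PipelineAI(pipeline_key="public/gpt-j:base")  # hypothetical key
# update swaps field values on the duplicate; the data is not revalidated.
tagged = base.copy(update={"tags": ["eval-run"]}, deep=True)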
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.PredictionGuard[source]# Wrapper around Prediction Guard large language models. To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass it as a named parameter to the constructor. To use Prediction Guard's API along with OpenAI models, set the environment variable OPENAI_API_KEY with your OpenAI API key as well. Example pgllm = PredictionGuard(model="MPT-7B-Instruct", token="my-access-token", output={ "type": "boolean" }) Validators raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field max_tokens: int = 256# Denotes the number of tokens to predict per generation. field model: Optional[str] = 'MPT-7B-Instruct'# Model name to use. field output: Optional[Dict[str, Any]] = None# The output type or structure for controlling the LLM output. field tags: Optional[List[str]] = None# Tags to add to the run trace. field temperature: float = 0.75# A non-negative float that tunes the degree of randomness in generation. field token: Optional[str] = None# Your Prediction Guard access token. field verbose: bool [Optional]# Whether to print out response text.
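A sketch combining the fields above to constrain the output structure (the token value is illustrative; the output schema follows the example in the class docstring):

from langchain.llms import PredictionGuard

pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    token="my-access-token",     # or set PREDICTIONGUARD_TOKEN instead
    output={"type": "boolean"},  # constrain responses to a boolean
    temperature=0.75,
)
print(pgllm("Is the sky blue?"))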
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message. get_token_ids(text: str) → List[int]# Get the token IDs present in the text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. property lc_attributes: Dict# Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]# Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]# Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool# Return whether or not the class is serializable. pydantic model langchain.llms.PromptLayerOpenAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer API key, respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional parameters: pl_tags – List of strings to tag the request with. return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAI openai = PromptLayerOpenAI(model_name="text-davinci-003") Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input.
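A sketch of the two PromptLayer-specific parameters described above (the tag name is illustrative, and the generation_info key for the request ID is assumed here to be pl_request_id):

from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["docs-example"],
    return_pl_id=True,
)

result = llm.generate(["Tell me a joke."])
gen = result.generations[0][0]
pl_id = gen.generation_info.get("pl_request_id")  # assumed key name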
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
    yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.PromptLayerOpenAIChat[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key, respectively.
All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional parameters:
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
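A minimal sketch (not from the generated reference) combining prefix_messages with a PromptLayer tag; the role/content dict format is the one the underlying OpenAIChat wrapper expects, and the tag value is illustrative:
from langchain.llms import PromptLayerOpenAIChat

llm = PromptLayerOpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        {"role": "system", "content": "You are a terse assistant."}
    ],
    pl_tags=["chat-demo"],  # illustrative tag
)
print(llm("What is the capital of France?"))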
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.RWKV[source]#
Wrapper around RWKV language models.
To use, you should have the rwkv python package installed, the pre-trained model file, and the model's config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32")
# Simplest invocation
response = model("Once upon a time, ")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field CHUNK_LEN: int = 256#
Batch size for prompt processing.
field max_tokens_per_generation: int = 256#
Maximum number of tokens to generate.
field model: str [Required]#
Path to the pre-trained RWKV model file.
field penalty_alpha_frequency: float = 0.4#
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
field penalty_alpha_presence: float = 0.4#
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
field rwkv_verbose: bool = True#
Print debug information.
field strategy: str = 'cpu fp32'#
The strategy to run the model with, specifying device and precision (e.g. "cpu fp32").
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 1.0#
The temperature to use for sampling.
field tokens_path: str [Required]#
Path to the RWKV tokens file.
field top_p: float = 0.5#
The top-p value to use for sampling.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
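A hedged sketch (not from the generated reference) combining the fields above; both file paths are placeholders for model artifacts you must download yourself:
from langchain.llms import RWKV

model = RWKV(
    model="./models/rwkv-3b-fp16.bin",         # placeholder model path
    tokens_path="./models/20B_tokenizer.json",  # placeholder tokenizer path
    strategy="cpu fp32",
    temperature=1.0,
    top_p=0.5,
    max_tokens_per_generation=256,
)
print(model("Q: Name the largest planet.\nA:"))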
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Replicate[source]#
Wrapper around Replicate models.
To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}
Example
from langchain.llms import Replicate
replicate = Replicate(
    model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
    input={"image_dimensions": "512x512"},
)
Validators
input={"image_dimensions": "512x512"}) Validators build_extra » all fields raise_deprecation » all fields set_verbose » verbose validate_environment » all fields field tags: Optional[List[str]] = None# Tags to add to the run trace. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str# Predict text from text. async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage# Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.SagemakerEndpoint[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed.
To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]#
The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The aws region where the Sagemaker model is deployed, eg. us-west-2.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
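Since this class has no Example block in the reference, here is a hedged sketch of a content handler plus endpoint wiring; the endpoint name, region, and JSON shapes are assumptions that must match your deployed model's request/response contract:
import json
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Request shape assumed; adjust to your model's contract
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Response shape assumed; adjust to your model's contract
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",         # placeholder endpoint name
    region_name="us-west-2",             # placeholder region
    credentials_profile_name="default",  # optional, see credentials docs above
    model_kwargs={"temperature": 0.7},
    content_handler=ContentHandler(),
)
print(llm("Tell me a joke."))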
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-large", task="text2text-generation",
    hardware=gpu
)
Example passing a fn that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def get_pipeline():
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer
    )
"text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Validators raise_deprecation » all fields set_verbose » verbose field device: int = 0# Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Key word arguments to pass to the model load function. field model_id: str = 'gpt2'# Hugging Face model_id to load the model. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field model_load_fn: Callable = <function _load_transformer># Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'transformers', 'torch']# Requirements to install on hardware to inference the model. field tags: Optional[List[str]] = None# Tags to add to the run trace. field task: str = 'text-generation'# Hugging Face task (“text-generation”, “text2text-generation” or “summarization”). field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.SelfHostedPipeline[source]#
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def load_pipeline():
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    return pipeline(
        "text-generation", model=model, tokenizer=tokenizer,
        max_new_tokens=10
    )

def inference_fn(pipeline, prompt, stop = None):
    return pipeline(prompt)[0]["generated_text"]

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
    model_load_fn=load_pipeline, hardware=gpu,
    model_reqs=model_reqs, inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
    pipeline=my_model,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline

generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
        ).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_load_fn: Callable [Required]#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'torch']#
Requirements to install on hardware to inference the model.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.StochasticAI[source]#
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Validators
build_extra » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field api_url: str = ''#
The API URL to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.VertexAI[source]#
Wrapper around Google Vertex AI large language models.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field credentials: Any = None#
The default custom credentials (google.auth.credentials.Credentials) to use
field location: str = 'us-central1'#
The default location to use when making API calls.
field max_output_tokens: int = 128#
Token limit determines the maximum amount of text output from one prompt.
field project: Optional[str] = None#
The default GCP project to use when making Vertex API calls.
field stop: Optional[List[str]] = None#
Optional list of stop words to use when generating.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.0#
Sampling temperature, it controls the degree of randomness in token selection.
field top_k: int = 40#
How the model selects tokens for output, the next token is selected from among the top_k most probable tokens.
field top_p: float = 0.95#
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
field tuned_model_name: Optional[str] = None#
The name of a tuned model, if it's provided, model_name is ignored.
field verbose: bool [Optional]#
Whether to print out response text.
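VertexAI has no Example block in this reference, so here is a hedged construction sketch using only the fields documented above; the project id is a placeholder, and the call assumes the google-cloud-aiplatform package and valid GCP credentials are available:
from langchain.llms import VertexAI

llm = VertexAI(
    project="my-gcp-project",  # placeholder GCP project id
    location="us-central1",
    temperature=0.0,
    max_output_tokens=128,
    top_k=40,
    top_p=0.95,
)
print(llm("What are the planets of the solar system?"))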
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
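The async variants above mirror their synchronous counterparts; a sketch of driving agenerate() with asyncio (the prompts are illustrative, and GCP credentials are assumed to be configured):

import asyncio
from langchain.llms import VertexAI

async def main() -> None:
    llm = VertexAI()
    # agenerate() awaits the completions, letting several
    # requests overlap their network waits.
    result = await llm.agenerate(["Define entropy.", "Define enthalpy."])
    for candidates in result.generations:
        print(candidates[0].text)

asyncio.run(main())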
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Writer[source]#
Wrapper around Writer large language models.
To use, you should have the environment variables WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use; if None, decides based on model name.
field best_of: Optional[int] = None#
Generates this many completions server-side and returns the "best".
field logprobs: bool = False#
Whether to return log probabilities.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field min_tokens: Optional[int] = None#
Minimum number of tokens to generate.
field model_id: str = 'palmyra-instruct'#
Model name to use.
field n: Optional[int] = None#
How many completions to generate.
field presence_penalty: Optional[float] = None#
Penalizes repeated tokens regardless of frequency.
field repetition_penalty: Optional[float] = None#
Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None#
Sequences at which completion generation will stop.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: Optional[float] = None#
What sampling temperature to use.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
field writer_api_key: Optional[str] = None#
Writer API key.
field writer_org_id: Optional[str] = None#
Writer organization ID.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool#
Return whether or not the class is serializable.
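To round off the LLM reference, a sketch of the save()/load round trip; load_llm from langchain.llms.loading is the usual counterpart to save(), though which providers it can reconstruct depends on your installed version (OPENAI_API_KEY is assumed here):

from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(temperature=0.0)
llm.save(file_path="llm.yaml")  # serialize the LLM configuration to YAML

restored = load_llm("llm.yaml")  # rebuild an equivalent LLM from the file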
Document Loaders#
All different types of document loaders.
class langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads AZLyrics webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)#
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.AirbyteJSONLoader(file_path: str)[source]#
Loader that loads local Airbyte json files.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]#
Loader that loads rows from an Airtable table.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load Table.
load() → List[langchain.schema.Document][source]#
Load Table.
pydantic model langchain.document_loaders.ApifyDatasetLoader[source]#
Logic for loading documents from Apify datasets.
field apify_client: Any = None#
field dataset_id: str [Required]#
The ID of the dataset on the Apify platform.
field dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]#
A custom function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from arxiv.org into a list of Documents.
Each Document represents one arXiv paper; the loader converts the original PDF into text.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]#
Loading logic for loading documents from Azure Blob Storage.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#
Loading logic for loading documents from Azure Blob Storage.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]#
Loader that uses Beautiful Soup to parse HTML files.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
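Since dataset_mapping_function is the non-obvious part of ApifyDatasetLoader, a sketch of mapping raw dataset items to Documents; the dataset id and item field names are placeholders that depend on the actor that produced the dataset:

from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document

# Assumes the apify-client package is installed and APIFY_API_TOKEN is set.
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # hypothetical dataset id
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),         # hypothetical item field
        metadata={"source": item.get("url", "")},  # hypothetical item field
    ),
)
documents = loader.load()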
class langchain.document_loaders.BibtexLoader(file_path: str, *, parser: Optional[langchain.utilities.bibtex.BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]#
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF is loaded into the document text. If no such file entry is present, the abstract field is used instead.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[langchain.schema.Document][source]#
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
class langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]#
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.BiliBiliLoader(video_urls: List[str])[source]#
Loader that loads bilibili transcripts.
load() → List[langchain.schema.Document][source]#
Load from bilibili url.
class langchain.document_loaders.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]#
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser's developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
base_url: str#
check_bs4() → None[source]#
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
download(path: str) → None[source]#
Download a file from a url.
Parameters
path – Path to the file.
folder_path: str#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
Returns
List of documents.
load_all_recursively: bool#
parse_filename(url: str) → str[source]#
Parse the filename from a url.
Parameters
url – Url to parse the filename from.
Returns
The filename.
class langchain.document_loaders.BlockchainDocumentLoader(contract_address: str, blockchainType: langchain.document_loaders.blockchain.BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]#
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet, Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain. The ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens on the contract. Note that for contracts with a large number of tokens, this may take a long time (e.g. 10k tokens is 100 requests). The default value is False for this reason.
The max_execution_time (sec) can be set to limit the execution time of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]#
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a key/value pair and output to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the file_path argument for all documents by default. You can override this by setting the source_column argument to the name of a column in the CSV file. The source of each document will then be set to the value of the column with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.ChatGPTLoader(log_file: str, num_logs: int = - 1)[source]#
Loader that loads conversations from exported ChatGPT data.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CoNLLULoader(file_path: str)[source]#
Load CoNLL-U files.
load() → List[langchain.schema.Document][source]#
Load from file path.
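A short sketch of the CSVLoader behavior described above; the file path and column names are placeholders:

from langchain.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="data/products.csv",  # hypothetical path
    source_column="product_url",    # hypothetical column; overrides file_path as source
    csv_args={"delimiter": ","},
)
docs = loader.load()
# Each row becomes one Document with "column: value" lines as page_content.
print(docs[0].page_content)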
class langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads College Confidential webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]#
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, OAuth2 login or personal access token authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as though viewed by users. You can pass an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE", limit=50)
Parameters
url (str) – _description_
api_key (str, optional) – _description_, defaults to None
username (str, optional) – _description_, defaults to None
oauth2 (dict, optional) – _description_, defaults to {}
token (str, optional) – _description_, defaults to None
cloud (bool, optional) – _description_, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
is_public_page(page: dict) → bool[source]#
Check if a page is publicly accessible.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, content_format: langchain.document_loaders.confluence.ContentFormat = ContentFormat.STORAGE, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None) → List[langchain.schema.Document][source]#
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archived_content (bool, optional) – Whether to include archived content, defaults to False
include_attachments (bool, optional) – defaults to False
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a language, you'll first need to install the appropriate Tesseract language pack.
Raises
ValueError – _description_
ImportError – _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]#
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API doesn't match the limit value. If limit is >100, Confluence seems to cap the response to 100. Also, due to the Atlassian Python package, we don't get the "next" values from the "_links" key, because they only return the value from the results key. So here, the pagination starts from 0 and goes until max_pages, getting the limit number of pages with each request. We have to manually check if there are more docs based on the length of the returned list of pages, rather than just checking for the presence of a next key in the response, as this page would have you do: https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]#
process_doc(link: str) → str[source]#
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_page(page: dict, include_attachments: bool, include_comments: bool, content_format: langchain.document_loaders.confluence.ContentFormat, ocr_languages: Optional[str] = None) → langchain.schema.Document[source]#
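Given the load() parameters listed above, a sketch of requesting the rendered view format for specific pages; the site, credentials and page id are placeholders, and the ContentFormat.VIEW member is assumed from the storage/view distinction described earlier:

from langchain.document_loaders import ConfluenceLoader
from langchain.document_loaders.confluence import ContentFormat

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345",
)
# Render pages as viewed HTML rather than the raw storage XML.
documents = loader.load(
    page_ids=["123456"],                # hypothetical page id
    content_format=ContentFormat.VIEW,  # assumed enum member
    limit=50,
)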
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: langchain.document_loaders.confluence.ContentFormat, ocr_languages: Optional[str] = None) → List[langchain.schema.Document][source]#
Process a list of pages into a list of documents.
process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_xls(link: str) → str[source]#
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]#
Validates proper combinations of init arguments.
class langchain.document_loaders.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]#
Load Pandas DataFrames.
load() → List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#
Loader that loads Diffbot json files.
load() → List[langchain.schema.Document][source]#
Extract text from Diffbot on all the URLs and return Document instances.
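A DataFrameLoader sketch; the frame is built inline for illustration, and columns other than page_content_column end up in each document's metadata:

import pandas as pd
from langchain.document_loaders import DataFrameLoader

df = pd.DataFrame({"text": ["first row", "second row"], "topic": ["a", "b"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()
print(docs[0].page_content)  # "first row"
print(docs[0].metadata)      # {'topic': 'a'}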
class langchain.document_loaders.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: typing.Union[typing.Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], typing.Type[langchain.document_loaders.text.TextLoader], typing.Type[langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]#
Loading logic for loading documents from a directory.
load() → List[langchain.schema.Document][source]#
Load documents.
load_file(item: pathlib.Path, path: pathlib.Path, docs: List[langchain.schema.Document], pbar: Optional[Any]) → None[source]#
class langchain.document_loaders.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]#
Load Discord chat logs.
load() → List[langchain.schema.Document][source]#
Load all chat messages.
pydantic model langchain.document_loaders.DocugamiLoader[source]#
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
field access_token: Optional[str] = None#
field api: str = 'https://api.docugami.com/v1preview1'#
field docset_id: Optional[str] = None#
field document_ids: Optional[Sequence[str]] = None#
field file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#
field min_chunk_size: int = 32#
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.Docx2txtLoader(file_path: str)[source]#
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will download it to a temporary file, use that, and then clean up the temporary file after completion.
load() → List[langchain.schema.Document][source]#
Load given path as single page.
class langchain.document_loaders.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
pydantic model langchain.document_loaders.EmbaasBlobLoader[source]#
Wrapper around embaas's document byte loader service.
To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
lazy_parse(blob: langchain.document_loaders.blob_loaders.schema.Blob) → Iterator[langchain.schema.Document][source]#
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
pydantic model langchain.document_loaders.EmbaasLoader[source]#
Wrapper around embaas's document loader service.
To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Validators
validate_blob_loader » blob_loader
validate_environment » all fields
field blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None#
The blob loader to use. If not provided, a default one will be created.
field file_path: str [Required]#
The path to the file to load.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load the documents from the file path lazily.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
load_and_split(text_splitter: Optional[langchain.text_splitter.TextSplitter] = None) → List[langchain.schema.Document][source]#
Load documents and split into chunks.
class langchain.document_loaders.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]#
EverNote Loader.
Loads an EverNote notebook export file, e.g. my_notebook.enex, into Documents. Instructions on producing this file can be found at https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents of the Document; any non-content metadata (e.g. 'author', 'created', 'updated' etc., but not 'content-raw' or 'resource') tags on the note will be extracted and stored as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all notes into a single long Document.
If this is set to True (the default), the only metadata on the document will be the 'source', which contains the file name of the export.
load() → List[langchain.schema.Document][source]#
Load documents from EverNote export file.
class langchain.document_loaders.FacebookChatLoader(path: str)[source]#
Loader that loads Facebook messages json directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
query#
The FQL query string to execute.
Type
str
page_content_field#
The field that contains the content of each page.
Type
str
secret#
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields#
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.FigmaFileLoader(access_token: str, ids: str, key: str)[source]#
Loader that loads Figma file json.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from GCS.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#
Loading logic for loading documents from GCS.
load() → List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.GitHubIssuesLoader[source]#
Validators
validate_environment » all fields
validate_since » since
field assignee: Optional[str] = None#
Filter on assigned user. Pass 'none' for no user and '*' for any user.
field creator: Optional[str] = None#
Filter on the user that created the issue.
field direction: Optional[Literal['asc', 'desc']] = None#
The direction to sort the results by. Can be one of: 'asc', 'desc'.
field include_prs: bool = True#
If True, include Pull Requests in results; otherwise ignore them.
field labels: Optional[List[str]] = None#
Label names to filter on. Example: bug,ui,@high.
field mentioned: Optional[str] = None#
Filter on a user that's mentioned in the issue.
field milestone: Optional[Union[int, Literal['*', 'none']]] = None#
If an integer is passed, it should be a milestone's number field. If the string '*' is passed, issues with any milestone are accepted. If the string 'none' is passed, issues without milestones are returned.
field since: Optional[str] = None#
Only show notifications updated after the given time. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
field sort: Optional[Literal['created', 'updated', 'comments']] = None#
What to sort results by. Can be one of: 'created', 'updated', 'comments'. Default is 'created'.
field state: Optional[Literal['open', 'closed', 'all']] = None#
Filter on issue state. Can be one of: 'open', 'closed', 'all'.
lazy_load() → Iterator[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
A list of Documents with page_content and metadata: url, title, creator, created_at, last_update_time, closed_time, number of comments, state, labels, assignee, assignees, milestone, locked, number, is_pull_request.
load() → List[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
A list of Documents with page_content and metadata: url, title, creator, created_at, last_update_time, closed_time, number of comments, state, labels, assignee, assignees, milestone, locked, number, is_pull_request.
parse_issue(issue: dict) → langchain.schema.Document[source]#
Create a Document object from a single GitHub issue.
property query_params: str#
property url: str#
class langchain.document_loaders.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]#
Loads files from a Git repository into a list of documents.
The repository can be local on disk at repo_path, or remote at clone_url, in which case it will be cloned to repo_path. Currently supports only text files.
Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
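A GitLoader sketch; the remote URL and filter are placeholders:

from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",                    # local checkout target
    clone_url="https://github.com/user/repo.git",  # hypothetical remote
    branch="main",
    file_filter=lambda path: path.endswith(".md"), # keep only markdown files
)
docs = loader.load()  # one Document per matching text file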
class langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#
Load GitBook data.
Load from either a single page, or load all (relative) paths in the navbar.
load() → List[langchain.schema.Document][source]#
Fetch text from one single GitBook page.
class langchain.document_loaders.GoogleApiClient(credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json'))[source]#
A generic Google API client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api and google python packages installed. As the Google API expects credentials, you need to set up a Google account and register your service: https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]#
Validate that either folder_id or document_ids is set, but not both.
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads all videos from a channel.
To use, you should have the googleapiclient and youtube_transcript_api python packages installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally you have to either provide a channel name or a list of video ids: https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
add_video_info: bool = True#
captions_language: str = 'en'#
channel_name: Optional[str] = None#
continue_on_failure: bool = False#
google_api_client: langchain.document_loaders.youtube.GoogleApiClient#
load() → List[langchain.schema.Document][source]#
Load documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]#
Validate that either folder_id or document_ids is set, but not both.
video_ids: Optional[List[str]] = None#
pydantic model langchain.document_loaders.GoogleDriveLoader[source]#
Loader that loads Google Docs from Google Drive.
Validators
validate_credentials_path » credentials_path
validate_inputs » all fields
field credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
field document_ids: Optional[List[str]] = None#
field file_ids: Optional[List[str]] = None#
field file_types: Optional[Sequence[str]] = None#
field folder_id: Optional[str] = None#
field load_trashed_files: bool = False#
field recursive: bool = False#
field service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')#
field token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GutenbergLoader(file_path: str)[source]#
Loader that uses urllib to load .txt web files.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Load Hacker News data from either main page results or the comments page.
load() → List[langchain.schema.Document][source]#
Get important HN webpage information.
Components are:
title
content
source url
time of post
author of the post
number of comments
rank of the post
load_comments(soup_info: Any) → List[langchain.schema.Document][source]#
Load comments from a HN post.
load_results(soup: Any) → List[langchain.schema.Document][source]#
Load items from an HN page.
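An HNLoader sketch; the item URL is a placeholder:

from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()  # post title/content plus comment documents
print(docs[0].metadata)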
class langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]#
Loading logic for loading documents from the Hugging Face Hub.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load documents lazily.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.IFixitLoader(web_path: str)[source]#
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A's and wikis from devices on iFixit using their open APIs and web scraping.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[langchain.schema.Document][source]#
load_guide(url_override: Optional[str] = None) → List[langchain.schema.Document][source]#