langchain.llms.predibase.Predibase¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
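batch has no prose description on this page; a minimal sketch of the Runnable batch API, assuming llm is an already-constructed LLM instance:
.. code-block:: python
# Run several inputs as a group; max_concurrency caps parallel calls.
outputs = llm.batch(
    ["What is Predibase?", "Name three uses of LLMs."],
    max_concurrency=2,
)
print(outputs)  # one completion string per input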
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
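A hedged sketch of generate_prompt; the model and API key below are placeholder values, not defaults:
.. code-block:: python
from langchain.llms import Predibase
from langchain.prompts import PromptTemplate

llm = Predibase(model="llama-2-13b", predibase_api_key="...")  # placeholders
prompt = PromptTemplate.from_template("Summarize in one line: {text}")
result = llm.generate_prompt(
    [prompt.format_prompt(text="LangChain wraps many LLM providers.")]
)
print(result.generations[0][0].text)  # top candidate for the first prompt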
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
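A short sketch of the token-counting helpers; the 4096-token budget below is assumed purely for illustration:
.. code-block:: python
from langchain.schema import HumanMessage

text = "How long is this input?"
if llm.get_num_tokens(text) < 4096:  # assumed budget, not a real limit
    print(llm.predict(text))
# The message-based variant sums the counts across messages.
n = llm.get_num_tokens_from_messages([HumanMessage(content=text)])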
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
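A hedged sketch of with_fallbacks: if the primary model raises, the fallback receives the same input (both model configurations below are placeholders):
.. code-block:: python
from langchain.llms import OpenAI, Predibase

primary = Predibase(model="llama-2-13b", predibase_api_key="...")  # placeholder
backup = OpenAI(model_name="text-davinci-003")
robust = primary.with_fallbacks([backup])
print(robust.invoke("Hello!"))  # falls back to backup on any Exception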
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Predibase¶
Predibase
langchain.llms.openai.OpenAI¶
class langchain.llms.openai.OpenAI[source]¶
Bases: BaseOpenAI
OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
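A sketch of the pass-through behavior described above; logprobs is not a declared field here and is assumed to be collected into model_kwargs and forwarded to openai.create:
.. code-block:: python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0, logprobs=1)
print(llm("Say hello"))  # logprobs travels with the provider call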
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed.
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed.
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
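A minimal sketch contrasting the two prediction entry points (llm as constructed above):
.. code-block:: python
from langchain.schema import HumanMessage

text_out = llm.predict("Translate 'bonjour' to English.", stop=["\n"])
msg_out = llm.predict_messages(
    [HumanMessage(content="Translate 'bonjour' to English.")]
)
print(text_out, msg_out.content)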
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
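stream has no description on this page; a minimal sketch of consuming completion chunks as they arrive:
.. code-block:: python
for chunk in llm.stream("Write a haiku about the sea."):
    print(chunk, end="", flush=True)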
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
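A short sketch tying max_context_size to max_tokens_for_prompt when budgeting a completion (the printed size depends on the configured model):
.. code-block:: python
prompt = "Tell me a joke."
print(llm.max_context_size)  # e.g. 4097 for text-davinci-003
llm.max_tokens = llm.max_tokens_for_prompt(prompt)  # context size minus prompt tokens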
Examples using OpenAI¶
Cohere Reranker
Google Serper API
Human as a tool
OpenWeatherMap API
Search Tools
Zapier Natural Language Actions API
Gradio Tools
SceneXplain
Entity Memory with SQLite storage
Argilla
PromptLayer
Streamlit
WandB Tracing
Comet
Aim
Weights & Biases
OpenAI
Rebuff
MLflow
Google Serper
Helicone
Shale Protocol
WhyLabs
ClearML
Ray Serve
Log, Trace, and Monitor Langchain LLM Calls
Portkey
Chat Over Documents with Vectara
Vectara Text Generation
CSV Agent
Xorbits Agent
Jira
Spark Dataframe Agent
Python Agent
SQL Database Agent
Natural Language APIs
JSON Agent
Github Toolkit
Pandas Dataframe Agent
OpenAPI agents
Psychic
Docugami
Caching integrations
Question Answering Benchmarking: State of the Union Address
Question Answering Benchmarking: Paul Graham Essay
Evaluating an OpenAPI Chain
Data Augmented Question Answering
Question Answering
Agent VectorDB Question Answering Benchmarking
HTTP request chain
OpenAPI chain
HuggingGPT
Context aware text splitting and QA / Chat
Question answering over group chat messages using Activeloop's DeepLake
Perform context-aware text splitting
Retrieve from vector stores directly
Retrieve as you generate with FLARE
Improve document indexing with HyDE
QA using Activeloop’s DeepLake
Graph QA
Tree of Thought (ToT) example
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Bash chain
LLM Symbolic Math
Summarization checker chain
Self-checking chain
Agent Debates with Tools
Weaviate self-querying
Chroma self-querying
DeepLake self-querying
Self-querying with Pinecone
Self-querying with MyScale
Qdrant self-querying
Lost in the middle: The problem with long contexts
How to add memory to a Multi-Input Chain
Conversation Knowledge Graph Memory
ConversationTokenBufferMemory
How to add Memory to an LLMChain
How to use multiple memory classes in the same chain
How to customize conversational memory
ConversationSummaryBufferMemory
Multiple callback handlers
Token counting
Logging to file
Multi-Input Tools
Defining Custom Tools
Tool Input Schema
Human-in-the-loop Tool Validation
Combine agents and vector stores
Access intermediate steps
Timeouts for agents
Streaming final agent output
Cap the max number of iterations
Async API
Tracking token usage
Serialization
Retry parser
Datetime parser
Pydantic (JSON) parser
Router
Transformation
Vector store-augmented text generation
FLARE
Hypothetical Document Embeddings
langchain.llms.tongyi.Tongyi¶
class langchain.llms.tongyi.Tongyi[source]¶
Bases: LLM
Tongyi Qwen large language models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Tongyi
tongyi = Tongyi()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
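A slightly fuller sketch than the example above, assuming DASHSCOPE_API_KEY is set in the environment:
.. code-block:: python
from langchain.llms import Tongyi

llm = Tongyi(model_name="qwen-plus-v1", top_p=0.8)
print(llm("Introduce Tongyi Qwen in one sentence."))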
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param dashscope_api_key: Optional[str] = None¶
Dashscope API key provided by Alibaba Cloud.
param max_retries: int = 10¶
Maximum number of retries to make when generating.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
param model_name: str = 'qwen-plus-v1'¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param prefix_messages: List [Optional]¶
Series of messages for Chat input.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param top_p: float = 0.8¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
|
https://api.python.langchain.com/en/latest/llms/langchain.llms.tongyi.Tongyi.html
|
2f9839c442d9-6
|
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using Tongyi¶
Tongyi Qwen
langchain.llms.chatglm.ChatGLM¶
class langchain.llms.chatglm.ChatGLM[source]¶
Bases: LLM
ChatGLM LLM service.
Example
from langchain.llms import ChatGLM
endpoint_url = (
"http://127.0.0.1:8000"
)
ChatGLM_llm = ChatGLM(
endpoint_url=endpoint_url
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
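A hedged sketch of calling the service with conversation history enabled, reusing the endpoint from the example above:
.. code-block:: python
llm = ChatGLM(endpoint_url="http://127.0.0.1:8000", with_history=True)
print(llm("What is ChatGLM?"))
print(llm.history)  # [prompt, response] pairs accumulate when with_history=True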
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint_url: str = 'http://127.0.0.1:8000/'¶
Endpoint URL to use.
param history: List[List] = []¶
History of the conversation.
param max_token: int = 20000¶
Maximum number of tokens to pass to the model.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.1¶
LLM model temperature from 0 to 10.
param top_p: float = 0.7¶
Top P for nucleus sampling, from 0 to 1.
param verbose: bool [Optional]¶
Whether to print out response text.
param with_history: bool = False¶
Whether to use history or not
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
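Example (an illustrative sketch of consuming the iterator; whether output arrives in one chunk or many depends on the provider):
.. code-block:: python
for chunk in llm.stream("Tell me a short story."):
    print(chunk, end="", flush=True)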
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
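Example (a sketch; backup_llm stands in for any second Runnable language model you have configured):
.. code-block:: python
resilient_llm = llm.with_fallbacks([backup_llm])
result = resilient_llm.invoke("ping")  # backup_llm is tried if llm raises an Exception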
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using ChatGLM¶
ChatGLM
langchain.chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain¶
class langchain.chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain[source]¶
Bases: BaseQAWithSourcesChain
Question-answering with sources over a vector database.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_documents_chain: BaseCombineDocumentsChain [Required]¶
Chain to use to combine documents.
param k: int = 4¶
Number of results to return from store
param max_tokens_limit: int = 3375¶
Restrict the docs to return from store based on tokens,
enforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param reduce_k_below_max_tokens: bool = False¶
Reduce the number of results to return from store based on the token limit.
param return_source_documents: bool = False¶
Return the source documents.
param search_kwargs: Dict[str, Any] [Optional]¶
Extra search args.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶
Vector Database to connect to.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
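Example (an illustrative construction sketch; the FAISS store, OpenAI model, and sample texts are assumptions, not requirements of this chain):
.. code-block:: python
from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain ships many chains.", "FAISS is a vector store."],
    OpenAIEmbeddings(),
)
chain = VectorDBQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", vectorstore=vectorstore
)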
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
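Example (a sketch, assuming the chain built above and the standard "answer" and "sources" output keys of the QA-with-sources chains):
.. code-block:: python
result = chain({"question": "What is FAISS?"}, return_only_outputs=True)
print(result["answer"], result["sources"])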
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
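Example (a sketch of driving the async variants with asyncio, assuming the same chain instance):
.. code-block:: python
import asyncio

async def main() -> None:
    result = await chain.acall({"question": "What is FAISS?"})
    print(result["answer"])

asyncio.run(main())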
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_chain_type(llm: BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → BaseQAWithSourcesChain¶
Load chain from chain type.
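Example (a sketch, assuming the vectorstore above; "map_reduce" is one of the standard chain types accepted by the loader):
.. code-block:: python
chain = VectorDBQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="map_reduce",
    chain_type_kwargs={"verbose": True},  # forwarded to the loaded combine chain
    vectorstore=vectorstore,
)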
classmethod from_llm(llm: BaseLanguageModel, document_prompt: BasePromptTemplate = PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n{context}\nQuestion: {question}\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt: BasePromptTemplate = PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall
not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their
courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always
promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour.
\n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs: Any) → BaseQAWithSourcesChain¶
Construct the chain from an LLM.
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
langchain.chains.combine_documents.base.BaseCombineDocumentsChain¶
class langchain.chains.combine_documents.base.BaseCombineDocumentsChain[source]¶
Bases: Chain, ABC
Base interface for chains combining documents.
Subclasses of this chain deal with combining documents in a variety of
ways. This base class exists to add some uniformity in the interface these types
of chains should expose. Namely, they expect an input key related to the documents
to use (default input_documents), and then also expose a method to calculate
the length of a prompt from documents (useful for outside callers to use to
determine whether it’s safe to pass a list of documents into this chain or whether
that will be longer than the context length).
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
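Example (a minimal, hypothetical subclass to make the interface concrete; a real implementation would usually format documents into a prompt rather than just concatenating them):
.. code-block:: python
from typing import Any, List, Tuple

from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.schema import Document

class ConcatDocumentsChain(BaseCombineDocumentsChain):
    """Hypothetical chain: joins page_content with blank lines."""

    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        # First element: the combined string; second: extra keys to return.
        return "\n\n".join(doc.page_content for doc in docs), {}

    async def acombine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        return self.combine_docs(docs, **kwargs)

Such a chain is then invoked through the default input key, e.g. ConcatDocumentsChain()({"input_documents": docs}).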
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
abstract async acombine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents into a single string.
Parameters
docs – List[Document], the documents to combine
**kwargs – Other parameters to use in combining documents, often
other inputs to the prompt.
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
abstract combine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents into a single string.
Parameters
docs – List[Document], the documents to combine
**kwargs – Other parameters to use in combining documents, often
other inputs to the prompt.
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int][source]¶
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list
of documents would exceed a certain prompt length. This is useful when
trying to ensure that the size of a prompt remains below a certain
context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the
total prompt length.
Returns
Returns None if the method does not depend on the prompt length,
otherwise the length of the prompt in tokens.
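Example (a sketch of the caller-side pattern this method enables; combine_chain, docs, and the 3000-token budget are assumptions):
.. code-block:: python
length = combine_chain.prompt_length(docs, question="What changed?")
while length is not None and length > 3000:
    docs = docs[:-1]  # drop the last document and re-measure
    length = combine_chain.prompt_length(docs, question="What changed?")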
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using BaseCombineDocumentsChain¶
!pip install bs4
langchain.chains.api.openapi.response_chain.APIResponderChain¶
class langchain.chains.api.openapi.response_chain.APIResponderChain[source]¶
Bases: LLMChain
Get the response parser.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If false, will also return extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True, **kwargs: Any) → LLMChain[source]¶
Get the response parser.
classmethod from_orm(obj: Any) → Model¶
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
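Example (a sketch of this inherited constructor; the llm instance and template text are illustrative):
.. code-block:: python
from langchain.chains import LLMChain

chain = LLMChain.from_string(llm=llm, template="Tell me a {adjective} joke.")
chain.predict(adjective="funny")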
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
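A hedged sketch of wrapping one runnable with a fallback, assuming ChatOpenAI and valid credentials; the model names are illustrative:

from langchain.chat_models import ChatOpenAI

primary = ChatOpenAI(model="gpt-4")
backup = ChatOpenAI(model="gpt-3.5-turbo")
# If primary raises (rate limit, timeout, ...), backup is invoked with the
# same input; by default any Exception triggers the fallback.
robust = primary.with_fallbacks([backup])
robust.invoke("Summarize the Runnable interface in one sentence.")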
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
langchain.chains.sql_database.query.SQLInputWithTables¶
class langchain.chains.sql_database.query.SQLInputWithTables[source]¶
Input for a SQL Chain.
question: str¶
table_names_to_use: List[str]¶
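A minimal sketch of the shape this input takes; the question and table name are illustrative:

# SQLInputWithTables is a typed dict, so a plain dict with matching
# keys is what a SQL query chain expects as input.
sql_input = {
    "question": "How many employees are there?",
    "table_names_to_use": ["employees"],
}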
langchain.chains.conversational_retrieval.base.ChatVectorDBChain¶
class langchain.chains.conversational_retrieval.base.ChatVectorDBChain[source]¶
Bases: BaseConversationalRetrievalChain
Chain for chatting with a vector database.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[List[CHAT_TURN_TYPE]], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the newly generated question to the combine_docs_chain.
If True, will pass the newly generated question along.
If False, will only use the newly generated question for retrieval and pass the
original question along to the combine_docs_chain.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param search_kwargs: dict [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param top_k_docs_for_context: int = 4¶
param vectorstore: VectorStore [Required]¶
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, vectorstore: VectorStore, condense_question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseConversationalRetrievalChain[source]¶
Load chain from LLM.
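A hedged construction sketch, assuming ChatOpenAI, OpenAIEmbeddings, and a FAISS vector store are available; all names and texts here are illustrative, and valid API credentials are required:

from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ChatVectorDBChain

vectorstore = FAISS.from_texts(
    ["LangChain chains compose LLM calls."], OpenAIEmbeddings()
)
chain = ChatVectorDBChain.from_llm(ChatOpenAI(temperature=0), vectorstore)
# The chain expects a question plus the running chat history.
result = chain({"question": "What do chains compose?", "chat_history": []})
print(result["answer"])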
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
langchain.chains.sequential.SimpleSequentialChain¶
class langchain.chains.sequential.SimpleSequentialChain[source]¶
Bases: Chain
Simple chain where the outputs of one step feed directly into the next.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param chains: List[langchain.chains.base.Chain] [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param strip_outputs: bool = False¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these, e.g., to identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
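A hedged end-to-end sketch, assuming an OpenAI LLM; each step must take exactly one input and produce exactly one output, since the single output string is fed straight into the next chain:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a synopsis for a play titled {title}."),
)
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a short review of this synopsis:\n{synopsis}"),
)
overall = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall.run("Tragedy at Sunset on the Beach")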
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_orm(obj: Any) → Model¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using SimpleSequentialChain¶
Zapier Natural Language Actions API
Rebuff
Baseten
Predibase
Replicate
Transformation
langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser¶
class langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser[source]¶
Bases: BaseOutputParser
Parse the request and error tags.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
async aparse(text: str) → T¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model¶
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
parse(llm_output: str) → str[source]¶
Parse the request and error tags.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
langchain.chains.query_constructor.base.load_query_constructor_chain¶
langchain.chains.query_constructor.base.load_query_constructor_chain(llm: BaseLanguageModel, document_contents: str, attribute_info: List[AttributeInfo], examples: Optional[List] = None, allowed_comparators: Optional[Sequence[Comparator]] = None, allowed_operators: Optional[Sequence[Operator]] = None, enable_limit: bool = False, **kwargs: Any) → LLMChain[source]¶
Load a query constructor chain.
Parameters
llm – BaseLanguageModel to use for the chain.
document_contents – The contents of the document to be queried.
attribute_info – A list of AttributeInfo objects describing
the attributes of the document.
examples – Optional list of examples to use for the chain.
allowed_comparators – An optional list of allowed comparators.
allowed_operators – An optional list of allowed operators.
enable_limit – Whether to enable the limit operator. Defaults to False.
**kwargs – Additional keyword arguments passed through to the returned LLMChain.
Returns
An LLMChain that can be used to construct queries.
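A hedged usage sketch, assuming an OpenAI LLM and the AttributeInfo schema class; the attributes describe hypothetical movie documents:

from langchain.chains.query_constructor.base import load_query_constructor_chain
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI

attribute_info = [
    AttributeInfo(name="genre", description="The movie genre", type="string"),
    AttributeInfo(name="year", description="Release year", type="integer"),
]
chain = load_query_constructor_chain(
    llm=OpenAI(temperature=0),
    document_contents="Brief summaries of movies",
    attribute_info=attribute_info,
    enable_limit=True,
)
# run() returns the raw model output; predict_and_parse() applies the
# prompt's output parser (an assumption about this chain's setup).
query_text = chain.run("comedies released after 2000")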
langchain.chains.graph_qa.cypher.extract_cypher¶
langchain.chains.graph_qa.cypher.extract_cypher(text: str) → str[source]¶
Extract Cypher code from a text.
Parameters
text – Text to extract Cypher code from.
Returns
Cypher code extracted from the text.
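A minimal sketch of the intended behavior; the exact delimiter handling is an assumption (code wrapped in triple backticks appears to be extracted, and input without backticks passed through unchanged):

from langchain.chains.graph_qa.cypher import extract_cypher

llm_output = "Here is the query:\n```\nMATCH (p:Person) RETURN p.name\n```"
print(extract_cypher(llm_output))
# expected (assumption): the Cypher between the backticks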
langchain.chains.query_constructor.ir.Operator¶
class langchain.chains.query_constructor.ir.Operator(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the operations.
AND = 'and'¶
OR = 'or'¶
NOT = 'not'¶
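A hedged sketch of composing these operators with the sibling IR types Comparison and Operation from the same module; the filter below means genre == "comedy" AND year > 2000:

from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
)

movie_filter = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="comedy"),
        Comparison(comparator=Comparator.GT, attribute="year", value=2000),
    ],
)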