and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs: Any) → BaseQAWithSourcesChain¶
Construct the chain from an LLM.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
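A minimal invocation sketch (assuming an already-constructed RetrievalQAWithSourcesChain bound to some retriever; the question string is illustrative):
# `chain` is assumed to be a RetrievalQAWithSourcesChain instance
result = chain.invoke({"question": "What did the president say about the economy?"})
result["answer"]   # generated answer
result["sources"]  # identifiers of the source documents used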
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_naming » all fields¶
Fix backwards compatibility in naming.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
Examples using RetrievalQAWithSourcesChain¶
Weaviate
Marqo
Psychic
QA over Documents
WebResearchRetriever
langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain¶
class langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, sequential_chain: SequentialChain, llm: Optional[BaseLanguageModel] = None, create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), input_key: str = 'query', output_key: str = 'result', max_checks: int = 2)[source]¶
Bases: Chain
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
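Once built, the checker can be run on a summary string; a minimal usage sketch (the input text is an illustrative, deliberately flawed summary):
checker_chain.run("Mammals can lay eggs, and the blue whale is the largest fish.")
# -> a revised summary with the false assertions corrected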
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)¶
[Deprecated]
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)¶
[Deprecated]
param create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)¶
[Deprecated]
param llm: Optional[BaseLanguageModel] = None¶
[Deprecated] LLM wrapper to use.
param max_checks: int = 2¶
Maximum number of times to check the assertions. Defaults to double-checking.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)¶
[Deprecated]
param sequential_chain: SequentialChain [Required]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?" | https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain.html |
2db04487a658-9 | question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
..code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
classmethod from_llm(llm: BaseLanguageModel, create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), verbose: bool = False, **kwargs: Any) → LLMSummarizationCheckerChain[source]¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
validator raise_deprecation » all fields[source]¶
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters | https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain.html |
2db04487a658-13 | with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
Examples using LLMSummarizationCheckerChain¶
Summarization checker chain
langchain.chains.query_constructor.ir.FilterDirective¶
class langchain.chains.query_constructor.ir.FilterDirective[source]¶
Bases: Expr, ABC
A filtering expression.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
accept(visitor: Visitor) → Any¶
Accept a visitor.
Parameters
visitor – visitor to accept
Returns
result of visiting
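FilterDirective itself is abstract; concrete directives such as Comparison and Operation (defined alongside it in langchain.chains.query_constructor.ir) compose into a filter tree. A minimal sketch, assuming the standard Comparator and Operator enums from that module:
from langchain.chains.query_constructor.ir import Comparator, Comparison, Operation, Operator
# (genre == "comedy") AND (year > 1990)
movie_filter = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="comedy"),
        Comparison(comparator=Comparator.GT, attribute="year", value=1990),
    ],
)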
langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic¶
langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic(pydantic_schema: Any, llm: BaseLanguageModel) → Chain[source]¶
Creates a chain that extracts information from a passage using pydantic schema.
Parameters
pydantic_schema – The pydantic schema of the entities to extract.
llm – The language model to use.
Returns
Chain that can be used to extract information from a passage.
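A hedged usage sketch (assumes an OpenAI-functions-capable chat model; the Person schema and input passage are illustrative):
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.chains.openai_functions.extraction import create_extraction_chain_pydantic

class Person(BaseModel):
    name: str
    age: int

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
chain = create_extraction_chain_pydantic(pydantic_schema=Person, llm=llm)
chain.run("Alex is 5 years old. Claudia is one year older than Alex.")
# -> roughly [Person(name='Alex', age=5), Person(name='Claudia', age=6)]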
Examples using create_extraction_chain_pydantic¶
Extraction with OpenAI Functions
Extraction
langchain.chains.sql_database.query.SQLInput¶
class langchain.chains.sql_database.query.SQLInput[source]¶
Bases: TypedDict
Input for a SQL Chain.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
question
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
question: str¶
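Since SQLInput is a TypedDict, it is constructed as a plain dictionary; a minimal sketch (the question text is illustrative):
from langchain.chains.sql_database.query import SQLInput

sql_input: SQLInput = {"question": "How many employees are there?"}
sql_input["question"]  # -> 'How many employees are there?'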
langchain.chains.api.openapi.chain.OpenAPIEndpointChain¶
class langchain.chains.api.openapi.chain.OpenAPIEndpointChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_request_chain: LLMChain, api_response_chain: Optional[LLMChain] = None, api_operation: APIOperation, requests: Requests = None, param_mapping: _ParamMapping, return_intermediate_steps: bool = False, instructions_key: str = 'instructions', output_key: str = 'output', max_text_length: Optional[ConstrainedIntValue] = None)[source]¶
Bases: Chain, BaseModel
Chain that interacts with an OpenAPI endpoint using natural language.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_operation: APIOperation [Required]¶
param api_request_chain: LLMChain [Required]¶
param api_response_chain: Optional[LLMChain] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param param_mapping: _ParamMapping [Required]¶
param requests: Requests [Optional]¶
param return_intermediate_steps: bool = False¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
deserialize_json_input(serialized_args: str) → dict[source]¶
Use the serialized typescript dictionary.
Resolve the path, query params dict, and optional requestBody dict.
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
..code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
classmethod from_api_operation(operation: APIOperation, llm: BaseLanguageModel, requests: Optional[Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → OpenAPIEndpointChain[source]¶
Create an OpenAPIEndpointChain from an operation and a spec.
classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: BaseLanguageModel, requests: Optional[Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) → OpenAPIEndpointChain[source]¶
Create an OpenAPIEndpoint from a spec at the specified url.
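A hedged construction sketch (the spec URL, path, and query below are placeholders, not a real API):
from langchain.llms import OpenAI
from langchain.chains.api.openapi.chain import OpenAPIEndpointChain

llm = OpenAI(temperature=0)
chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.json",  # placeholder OpenAPI spec
    path="/search",                               # placeholder path in that spec
    method="get",
    llm=llm,
)
chain.run("Find articles about renewable energy")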
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?" | https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html |
1a6a3f042d6e-7 | question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
Examples using OpenAPIEndpointChain¶
Evaluating an OpenAPI Chain
OpenAPI chain
langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain¶
langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm: BaseLanguageModel, **kwargs: Any) → LLMChain[source]¶
Create a question answering chain that returns an answer with sources.
Parameters
llm – Language model to use for the chain.
**kwargs – Keyword arguments to pass to create_qa_with_structure_chain.
Returns
Chain (LLMChain) that can be used to answer questions with citations.
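A hedged construction sketch (the returned LLMChain is typically wired into a documents chain, e.g. a StuffDocumentsChain, that supplies the retrieved context):
from langchain.chat_models import ChatOpenAI
from langchain.chains.openai_functions.qa_with_structure import create_qa_with_sources_chain

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
qa_chain = create_qa_with_sources_chain(llm)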
Examples using create_qa_with_sources_chain¶
Structure answers with OpenAI functions
Retrieval QA using OpenAI functions
langchain.chains.router.llm_router.LLMRouterChain¶
class langchain.chains.router.llm_router.LLMRouterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, llm_chain: LLMChain)[source]¶
Bases: RouterChain
A router chain that uses an LLM chain to perform routing.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm_chain: LLMChain [Required]¶
LLM chain used to perform routing
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified inChain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async aroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route¶
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
..code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate, **kwargs: Any) → LLMRouterChain[source]¶
Convenience constructor.
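A hedged sketch of the constructor (the routing template is a placeholder; it must instruct the model to emit the JSON that RouterOutputParser expects, with destination and next_inputs keys):
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

router_template = "..."  # placeholder routing prompt ending with JSON-format instructions
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(OpenAI(temperature=0), router_prompt)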
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
route(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route¶
Route inputs to a destination chain.
Parameters
inputs – inputs to the chain
callbacks – callbacks to use for the chain
Returns
a Route object
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_prompt » all fields[source]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
Examples using LLMRouterChain¶
Router
langchain.chains.combine_documents.refine.RefineDocumentsChain¶
class langchain.chains.combine_documents.refine.RefineDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', initial_llm_chain: LLMChain, refine_llm_chain: LLMChain, document_variable_name: str, initial_response_name: str, document_prompt: BasePromptTemplate = None, return_intermediate_steps: bool = False)[source]¶
Bases: BaseCombineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
This algorithm first calls initial_llm_chain on the first document, passing
that first document in with the variable name document_variable_name, and
produces a new variable with the variable name initial_response_name.
Then, it loops over every remaining document. This is called the “refine” step.
It calls refine_llm_chain,
passing in that document with the variable name document_variable_name
as well as the previous response with the variable name initial_response_name.
Example
from langchain.chains import RefineDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
input_variables=["page_content"],
template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
initial_response_name = "prev_response"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
prompt_refine = PromptTemplate.from_template(
"Here's your first summary: {prev_response}. "
"Now add to it based on the following context: {context}"
)
llm_chain_refine = LLMChain(llm=llm, prompt=prompt_refine)
chain = RefineDocumentsChain(
initial_llm_chain=llm_chain,
refine_llm_chain=llm_chain_refine,
document_prompt=document_prompt,
document_variable_name=document_variable_name,
initial_response_name=initial_response_name,
)
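A hedged usage sketch continuing the example above (document contents are placeholders; input_documents is the default input_key):
from langchain.schema import Document

docs = [
    Document(page_content="Apples are red."),
    Document(page_content="Blueberries are blue."),
]
summary = chain.run(input_documents=docs)  # returns the refined 'output_text' string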
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param document_prompt: BasePromptTemplate [Optional]¶
Prompt to use to format each document, gets passed to format_document.
param document_variable_name: str [Required]¶
The variable name in the initial_llm_chain to put the documents in.
If only one variable in the initial_llm_chain, this need not be provided.
param initial_llm_chain: LLMChain [Required]¶
LLM chain to use on initial document.
param initial_response_name: str [Required]¶
The variable name to format the initial response in when refining.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param refine_llm_chain: LLMChain [Required]¶
LLM chain to use when refining.
param return_intermediate_steps: bool = False¶
Return the results of the refine steps in the output.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Async combine docs by applying the initial chain to the first document, then sequentially refining the result over the remaining documents.
Parameters
docs – List of documents to combine
callbacks – Callbacks to be passed through
**kwargs – additional parameters to be passed to LLM calls (like other
input variables besides the documents)
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Combine docs by applying the initial chain to the first document, then sequentially refining the result over the remaining documents.
Parameters
docs – List of documents to combine
callbacks – Callbacks to be passed through
**kwargs – additional parameters to be passed to LLM calls (like other
input variables besides the documents)
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
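A hedged sketch of calling combine_docs directly, assuming docs is a list of Document objects as in the usage sketch above:
# The second element carries any extra output keys, e.g. intermediate steps
# when return_intermediate_steps=True.
output_text, extra = chain.combine_docs(docs)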
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
validator get_default_document_variable_name » all fields[source]¶
Get default document variable name, if not provided.
validator get_return_intermediate_steps » all fields[source]¶
For backwards compatibility.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list
of documents would exceed a certain prompt length. This useful when
trying to ensure that the size of a prompt remains below a certain
context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the
total prompt length.
Returns
Returns None if the method does not depend on the prompt length,
otherwise the length of the prompt in tokens.
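A hedged sketch of using prompt_length as a guard (the 4000-token budget and the trimming strategy are arbitrary assumptions):
length = chain.prompt_length(docs)
if length is not None and length > 4000:
    # Hypothetical mitigation: drop the tail of the document list before combining.
    docs = docs[: len(docs) // 2]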
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.router.base.RouterChain¶
class langchain.chains.router.base.RouterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]¶
Bases: Chain, ABC
Chain that outputs the name of a destination chain and the inputs to it.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
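A minimal subclass sketch (hedged; the class name and keyword heuristic are invented for illustration). A concrete router must return a destination name plus the next_inputs for that destination, matching this class’s output_keys:
from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.router.base import RouterChain


class KeywordRouterChain(RouterChain):
    """Toy router that picks a destination chain by keyword matching."""

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, Any]:
        text = inputs["input"].lower()
        destination = "math" if ("sum" in text or "integral" in text) else None
        return {"destination": destination, "next_inputs": {"input": inputs["input"]}}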
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param memory: Optional[langchain.schema.memory.BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async aroute(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route[source]¶
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
route(inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Route[source]¶
Route inputs to a destination chain.
Parameters
inputs – inputs to the chain
callbacks – callbacks to use for the chain
Returns
a Route object
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
abstract property input_keys: List[str]¶
Keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.api.openapi.requests_chain.APIRequesterChain¶
class langchain.chains.api.openapi.requests_chain.APIRequesterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: LLMChain
Get the request parser.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If False, extra information about the generation will also be returned.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
classmethod from_llm_and_typescript(llm: BaseLanguageModel, typescript_definition: str, verbose: bool = True, **kwargs: Any) → LLMChain[source]¶
Get the request parser.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.openai_functions.qa_with_structure.AnswerWithSources¶
class langchain.chains.openai_functions.qa_with_structure.AnswerWithSources(*, answer: str, sources: List[str])[source]¶
Bases: BaseModel
An answer to the question, with sources.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param answer: str [Required]¶
Answer to the question that was asked
param sources: List[str] [Required]¶
List of sources used to answer the question
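A hedged construction sketch (field values are placeholders):
answer = AnswerWithSources(
    answer="The chain returned 42.",
    sources=["notes.txt"],
)
answer.dict()
# -> {'answer': 'The chain returned 42.', 'sources': ['notes.txt']}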
langchain.chains.openai_functions.citation_fuzzy_match.FactWithEvidence¶
class langchain.chains.openai_functions.citation_fuzzy_match.FactWithEvidence(*, fact: str, substring_quote: List[str])[source]¶
Bases: BaseModel
Class representing a single statement.
Each fact has a body and a list of sources.
If there are multiple facts make sure to break them apart
such that each one only uses a set of sources that are relevant to it.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param fact: str [Required]¶
Body of the sentence, as part of a response
param substring_quote: List[str] [Required]¶
Each source should be a direct quote from the context, as a substring of the original content
get_spans(context: str) → Iterator[str][source]¶
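A hedged sketch (the context string and quote are placeholders; the exact span format yielded by get_spans is not documented here):
fact = FactWithEvidence(
    fact="The sky appeared blue.",
    substring_quote=["the sky is blue"],
)
for span in fact.get_spans("Observation log: the sky is blue today."):
    print(span)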
langchain.chains.openai_functions.base.create_structured_output_chain¶
langchain.chains.openai_functions.base.create_structured_output_chain(output_schema: Union[Dict[str, Any], Type[BaseModel]], llm: BaseLanguageModel, prompt: BasePromptTemplate, *, output_parser: Optional[BaseLLMOutputParser] = None, **kwargs: Any) → LLMChain[source]¶
Create an LLMChain that uses an OpenAI function to get a structured output.
Parameters
output_schema – Either a dictionary or pydantic.BaseModel class. If a dictionary
is passed in, it’s assumed to already be a valid JsonSchema.
For best results, pydantic.BaseModels should have docstrings describing what
the schema represents and descriptions for the parameters.
llm – Language model to use, assumed to support the OpenAI function-calling API.
prompt – BasePromptTemplate to pass to the model.
output_parser – BaseLLMOutputParser to use for parsing model outputs. By default
will be inferred from the function types. If pydantic.BaseModels are passed
in, then the OutputParser will try to parse outputs using those. Otherwise
model outputs will simply be parsed as JSON.
Returns
An LLMChain that will pass the given function to the model.
Example
from typing import Optional

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from pydantic import BaseModel, Field
class Dog(BaseModel):
"""Identifying information about a dog."""
name: str = Field(..., description="The dog's name")
color: str = Field(..., description="The dog's color")
fav_food: Optional[str] = Field(None, description="The dog's favorite food")
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt_msgs = [
SystemMessage(
content="You are a world class algorithm for extracting information in structured formats."
),
HumanMessage(content="Use the given format to extract information from the following input:"),
HumanMessagePromptTemplate.from_template("{input}"),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_structured_output_chain(Dog, llm, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
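As noted for output_schema above, a plain JsonSchema dictionary can be passed instead of a pydantic class. A hedged sketch reusing the llm and prompt from the example (the schema contents are illustrative):
json_schema = {
    "title": "Dog",
    "description": "Identifying information about a dog.",
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The dog's name"},
        "color": {"type": "string", "description": "The dog's color"},
    },
    "required": ["name", "color"],
}
dict_chain = create_structured_output_chain(json_schema, llm, prompt)
dict_chain.run("Harry was a chubby brown beagle")
# -> {'name': 'Harry', 'color': 'brown'} (parsed as JSON rather than a pydantic object)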
Examples using create_structured_output_chain¶
Using OpenAI functions
langchain.chains.query_constructor.ir.Comparator¶
class langchain.chains.query_constructor.ir.Comparator(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Enumerator of the comparison operators.
Methods
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal()
Return True if the string is a decimal string, False otherwise.
isdigit() | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
8a066912c231-2 | rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
EQ
GT
GTE
LT
LTE
CONTAIN
LIKE
capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower
case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
count(sub[, start[, end]]) → int¶
Return the number of non-overlapping occurrences of substring sub in | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
string S[start:end]. Optional arguments start and end are
interpreted as in slice notation.
encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
encoding – The encoding in which to encode the string.
errors – The error handling scheme to use for encoding errors.
The default is ‘strict’ meaning that encoding errors raise a
UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and
‘xmlcharrefreplace’ as well as any other name registered with
codecs.register_error that can handle UnicodeEncodeErrors.
endswith(suffix[, start[, end]]) → bool¶
Return True if S ends with the specified suffix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
suffix can also be a tuple of strings to try.
expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
find(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
format(*args, **kwargs) → str¶
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces (‘{’ and ‘}’).
format_map(mapping) → str¶
Return a formatted version of S, using substitutions from mapping.
The substitutions are identified by braces (‘{’ and ‘}’).
index(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found, | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and
there is at least one character in the string.
isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there
is at least one character in the string.
isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F.
Empty string is ASCII too.
isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and
there is at least one character in the string.
isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there
is at least one character in the string.
isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier,
such as “def” or “class”.
islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and
there is at least one cased character in the string.
isnumeric()¶
Return True if the string is a numeric string, False otherwise. | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
A string is numeric if all characters in the string are numeric and there is at
least one character in the string.
isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in
repr() or if it is empty.
isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there
is at least one character in the string.
istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only
follow uncased characters and lowercase characters only cased ones.
isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and
there is at least one cased character in the string.
join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string.
The result is returned as a new string.
Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
lower()¶
Return a copy of the string converted to lowercase.
lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
static maketrans()¶
Return a translation table usable for str.translate(). | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
If there is only one argument, it must be a dictionary mapping Unicode
ordinals (integers) or characters to Unicode ordinals, strings or None.
Character keys will be then converted to ordinals.
If there are two arguments, they must be strings of equal length, and
in the resulting dictionary, each character in x will be mapped to the
character at the same position in y. If there is a third argument, it
must be a string, whose characters will be mapped to None in the result.
partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found,
returns a 3-tuple containing the part before the separator, the separator
itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string
and two empty strings.
removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):].
Otherwise, return a copy of the original string.
removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty,
return string[:-len(suffix)]. Otherwise, return a copy of the original
string.
replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
count – Maximum number of occurrences to replace.
-1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are
replaced.
rfind(sub[, start[, end]]) → int¶ | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
rindex(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If
the separator is found, returns a 3-tuple containing the part before the
separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings
and the original string.
rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed. | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
If chars is given and not None, remove characters in chars instead.
split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Note, str.split() is mainly useful for data that has been intentionally
delimited. With natural text that includes punctuation, consider using
the regular expression module.
splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and
true.
startswith(prefix[, start[, end]]) → bool¶
Return True if S starts with the specified prefix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
prefix can also be a tuple of strings to try.
strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining
cased characters have lower case.
translate(table, /)¶
Replace each character in the string using the given translation table. | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
table – Translation table, which must be a mapping of Unicode ordinals to
Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a
dictionary or list. If this operation raises LookupError, the character is
left untouched. Characters mapped to None are deleted.
upper()¶
Return a copy of the string converted to uppercase.
zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
CONTAIN = 'contain'¶
EQ = 'eq'¶
GT = 'gt'¶
GTE = 'gte'¶
LIKE = 'like'¶
LT = 'lt'¶
LTE = 'lte'¶ | https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.ir.Comparator.html |
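In practice the enum is consumed together with the Comparison class from the same module to express a single filter clause of a structured query. A minimal sketch (the attribute name and value are illustrative, not taken from this page):
from langchain.chains.query_constructor.ir import Comparator, Comparison

# Comparator subclasses str, so members compare equal to their string values.
assert Comparator.EQ == "eq"

# A filter clause of the kind a self-query retriever translates into a vector-store filter.
clause = Comparison(comparator=Comparator.EQ, attribute="genre", value="comedy")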
1263008bc5ce-0 | langchain_experimental.plan_and_execute.schema.Step¶
class langchain_experimental.plan_and_execute.schema.Step(*, value: str)[source]¶
Bases: BaseModel
Step.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param value: str [Required]¶
The value. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.Step.html |
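A minimal usage sketch (the step text is illustrative):
from langchain_experimental.plan_and_execute.schema import Step

step = Step(value="Look up the current population of Boise, Idaho")
print(step.value)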
1fb44538177c-0 | langchain_experimental.plan_and_execute.schema.ListStepContainer¶
class langchain_experimental.plan_and_execute.schema.ListStepContainer(*, steps: List[Tuple[Step, StepResponse]] = None)[source]¶
Bases: BaseStepContainer
List step container.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param steps: List[Tuple[langchain_experimental.plan_and_execute.schema.Step, langchain_experimental.plan_and_execute.schema.StepResponse]] [Optional]¶
The steps.
add_step(step: Step, step_response: StepResponse) → None[source]¶
Add step and step response to the container.
get_final_response() → str[source]¶
Return the final response based on steps taken.
get_steps() → List[Tuple[Step, StepResponse]][source]¶ | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.ListStepContainer.html |
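A minimal usage sketch, assuming StepResponse (defined in the same module) exposes a single response field:
from langchain_experimental.plan_and_execute.schema import (
    ListStepContainer,
    Step,
    StepResponse,
)

container = ListStepContainer()
container.add_step(
    Step(value="Look up the population of Boise"),
    StepResponse(response="Roughly 235,000 people."),
)
print(container.get_steps())           # [(Step(...), StepResponse(...))]
print(container.get_final_response())  # the response recorded for the last step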
929cb8758cc7-0 | langchain_experimental.plan_and_execute.schema.BaseStepContainer¶
class langchain_experimental.plan_and_execute.schema.BaseStepContainer[source]¶
Bases: BaseModel
Base step container.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract add_step(step: Step, step_response: StepResponse) → None[source]¶
Add step and step response to the container.
abstract get_final_response() → str[source]¶
Return the final response based on steps taken. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.BaseStepContainer.html |
8561361f8de2-0 | langchain_experimental.plan_and_execute.schema.PlanOutputParser¶
class langchain_experimental.plan_and_execute.schema.PlanOutputParser[source]¶
Bases: BaseOutputParser
Plan output parser.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
invoke(input: str | langchain.schema.messages.BaseMessage, config: langchain.schema.runnable.RunnableConfig | None = None) → T¶
abstract parse(text: str) → Plan[source]¶
Parse into a plan.
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.PlanOutputParser.html |
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.schema.PlanOutputParser.html |
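Because parse is abstract, the class is used by subclassing it. A minimal sketch that treats every non-empty line of the model output as one step; the class name is hypothetical, and the built-in chat planner ships its own parser:
from langchain_experimental.plan_and_execute.schema import Plan, PlanOutputParser, Step


class LineSeparatedPlanOutputParser(PlanOutputParser):
    """Hypothetical parser: one plan step per non-empty output line."""

    def parse(self, text: str) -> Plan:
        steps = [Step(value=line.strip()) for line in text.splitlines() if line.strip()]
        return Plan(steps=steps)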
febbab4be41b-0 | langchain_experimental.plan_and_execute.executors.base.BaseExecutor¶
class langchain_experimental.plan_and_execute.executors.base.BaseExecutor[source]¶
Bases: BaseModel
Base executor.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take async step.
abstract step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take step. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.BaseExecutor.html |
4b81831f1e88-0 | langchain_experimental.plan_and_execute.executors.agent_executor.load_agent_executor¶
langchain_experimental.plan_and_execute.executors.agent_executor.load_agent_executor(llm: BaseLanguageModel, tools: List[BaseTool], verbose: bool = False, include_task_in_prompt: bool = False) → ChainExecutor[source]¶
Load an agent executor.
Parameters
llm – BaseLanguageModel
tools – List[BaseTool]
verbose – bool. Defaults to False.
include_task_in_prompt – bool. Defaults to False.
Returns
ChainExecutor | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.agent_executor.load_agent_executor.html |
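A minimal usage sketch; the tool list is illustrative and any List[BaseTool] works:
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import load_agent_executor

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
executor = load_agent_executor(llm, tools, verbose=True)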
c14408d58025-0 | langchain_experimental.plan_and_execute.executors.base.ChainExecutor¶
class langchain_experimental.plan_and_execute.executors.base.ChainExecutor(*, chain: Chain)[source]¶
Bases: BaseExecutor
Chain executor.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param chain: langchain.chains.base.Chain [Required]¶
The chain to use.
async astep(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take async step.
step(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → StepResponse[source]¶
Take step. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.executors.base.ChainExecutor.html |
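A minimal sketch wrapping an LLMChain; the current_step variable name is illustrative and only needs to match the keys passed to step:
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.plan_and_execute.executors.base import ChainExecutor

chain = LLMChain(
    llm=ChatOpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Answer concisely: {current_step}"),
)
executor = ChainExecutor(chain=chain)
result = executor.step({"current_step": "What is the capital of Idaho?"})
print(result.response)  # StepResponse.response holds the chain output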
d3ff2274cadb-0 | langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute¶
class langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, planner: BasePlanner, executor: BaseExecutor, step_container: BaseStepContainer = None, input_key: str = 'input', output_key: str = 'output')[source]¶
Bases: Chain
Plan and execute a chain of steps.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param executor: langchain_experimental.plan_and_execute.executors.base.BaseExecutor [Required]¶
The executor to use.
param input_key: str = 'input'¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param output_key: str = 'output'¶
param planner: langchain_experimental.plan_and_execute.planners.base.BasePlanner [Required]¶
The planner to use.
param step_container: langchain_experimental.plan_and_execute.schema.BaseStepContainer [Optional]¶
The step container to use.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
d3ff2274cadb-5 | Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.agent_executor.PlanAndExecute.html |
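A typical assembly uses the load_chat_planner and load_agent_executor helpers; a minimal sketch (the tool choice and question are illustrative):
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

planner = load_chat_planner(llm)
executor = load_agent_executor(llm, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)

agent.run("What is 3 raised to the 0.5 power, rounded to two decimals?")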
7a0d73246710-0 | langchain_experimental.plan_and_execute.planners.chat_planner.load_chat_planner¶
langchain_experimental.plan_and_execute.planners.chat_planner.load_chat_planner(llm: BaseLanguageModel, system_prompt: str = "Let's first understand the problem and devise a plan to solve the problem. Please output the plan starting with the header 'Plan:' and then followed by a numbered list of steps. Please make the plan the minimum number of steps required to accurately complete the task. If the task is a question, the final step should almost always be 'Given the above steps taken, please respond to the users original question'. At the end of your plan, say '<END_OF_PLAN>'") → LLMPlanner[source]¶
Load a chat planner.
Parameters
llm – Language model.
system_prompt – System prompt.
Returns
LLMPlanner | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.chat_planner.load_chat_planner.html |
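The system_prompt argument can be overridden to steer planning, as long as the replacement keeps the numbered-list format and the '<END_OF_PLAN>' marker that the default parsing and stop sequence rely on. A minimal sketch, assuming an llm as in the examples above:
planner = load_chat_planner(
    llm,
    system_prompt=(
        "Devise the shortest numbered plan that solves the task. "
        "End the plan with '<END_OF_PLAN>'."
    ),
)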
8ba1855015ac-0 | langchain_experimental.plan_and_execute.planners.base.LLMPlanner¶
class langchain_experimental.plan_and_execute.planners.base.LLMPlanner(*, llm_chain: LLMChain, output_parser: PlanOutputParser, stop: Optional[List] = None)[source]¶
Bases: BasePlanner
LLM planner.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
The LLM chain to use.
param output_parser: langchain_experimental.plan_and_execute.schema.PlanOutputParser [Required]¶
The output parser to use.
param stop: Optional[List] = None¶
The stop list to use.
async aplan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Plan[source]¶
Given input, asynchronously decide what to do.
plan(inputs: dict, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Plan[source]¶
Given input, decide what to do. | https://api.python.langchain.com/en/latest/plan_and_execute/langchain_experimental.plan_and_execute.planners.base.LLMPlanner.html |
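The load_chat_planner helper above builds an LLMPlanner with the library's own prompt and parser; constructing one by hand looks roughly like the sketch below, which reuses the hypothetical line-based PlanOutputParser subclass sketched earlier (the prompt wording is illustrative):
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_experimental.plan_and_execute.planners.base import LLMPlanner

prompt = ChatPromptTemplate.from_template(
    "Devise a short numbered plan for the following task:\n{input}"
)
planner = LLMPlanner(
    llm_chain=LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt),
    output_parser=LineSeparatedPlanOutputParser(),  # hypothetical parser from the earlier sketch
    stop=["<END_OF_PLAN>"],
)
plan = planner.plan({"input": "Find the tallest mountain in Idaho."})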