| id | text | source |
|---|---|---|
2af3c533ff41-0 | PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BaseChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
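The `format` call above renders the template with the supplied keyword arguments. A minimal pure-Python sketch of the f-string behavior (the `SimplePrompt` class is a hypothetical stand-in, not the langchain implementation):

```python
class SimplePrompt:
    """Hypothetical stand-in that mimics f-string template formatting."""

    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        # Fail loudly if a declared variable is missing, then render.
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"missing variables: {sorted(missing)}")
        return self.template.format(**kwargs)

prompt = SimplePrompt("Tell me a {adjective} joke.", ["adjective"])
print(prompt.format(adjective="funny"))  # Tell me a funny joke.
```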
2af3c533ff41-1 | file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt tem... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
2af3c533ff41-2 | A list of the names of the variables the prompt template expects.
field prefix: str = ''#
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-str... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
2af3c533ff41-3 | field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwa... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
2af3c533ff41-4 | Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwarg... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
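`from_examples` builds a few-shot prompt from a prefix, a list of examples, and a suffix joined by `example_separator`. A rough pure-Python sketch of that assembly (`build_few_shot_template` is a hypothetical helper mirroring the signature above, not the library code):

```python
def build_few_shot_template(examples, suffix, example_separator="\n\n", prefix=""):
    """Hypothetical sketch of from_examples: join prefix, examples, and
    suffix into one template string with the example separator."""
    # Drop empty parts so a blank prefix does not leave a stray separator.
    pieces = [prefix, *examples, suffix]
    return example_separator.join(p for p in pieces if p)

template = build_few_shot_template(
    examples=["Q: 2+2? A: 4", "Q: 3+3? A: 6"],
    suffix="Q: {question} A:",
    prefix="Answer each question.",
)
print(template.format(question="5+5?"))
```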
2af3c533ff41-5 | Create Chat Messages.
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or the local filesystem.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last ... | https://python.langchain.com/en/latest/reference/modules/prompts.html |
4ccf7c946f09-0 | Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma-separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
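A comma-separated list parser of this kind can be sketched in a few lines of plain Python (an illustration of the idea, not the library's exact splitting rules):

```python
def parse_comma_separated(text):
    """Sketch of a comma-separated list parser: split the completion on
    commas and strip whitespace (the library's exact rules may differ)."""
    return [item.strip() for item in text.strip().split(",") if item.strip()]

print(parse_comma_separated("red, green, blue"))  # ['red', 'green', 'blue']
```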
4ccf7c946f09-1 | field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instruc... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
4ccf7c946f09-2 | and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field ... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
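The `RegexDictParser` fields above suggest a parser that maps output keys to labeled lines and skips values equal to `no_update_value`. A hypothetical sketch of that behavior (the exact field semantics are assumptions):

```python
import re

def parse_regex_dict(text, output_key_to_format, no_update_value=None):
    """Hypothetical sketch of a regex-to-dict parser: for each output key,
    look for a 'Label: value' line; values equal to no_update_value are
    treated as 'leave unchanged' and skipped."""
    result = {}
    for key, label in output_key_to_format.items():
        match = re.search(rf"{re.escape(label)}:\s*(.+)", text)
        if match and match.group(1).strip() != no_update_value:
            result[key] = match.group(1).strip()
    return result

text = "Action: search\nAction Input: cats"
print(parse_regex_dict(text, {"action": "Action", "input": "Action Input"}))
# {'action': 'search', 'input': 'cats'}
```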
4ccf7c946f09-3 | field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], outp... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
4ccf7c946f09-4 | that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
4ccf7c946f09-5 | The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[sourc... | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
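The retry parsers described above share one pattern: try to parse, and on failure hand the prompt, the failed completion, and the raised error back to a model for a rewrite. A minimal sketch, with a caller-supplied `fix` callable standing in for the retry LLMChain:

```python
def parse_with_retry(completion, prompt, parse, fix, max_retries=1):
    """Sketch of the retry pattern: try to parse; on failure, hand the
    prompt, failed completion, and error to 'fix' (a stand-in for the
    retry LLMChain) and try again with its rewritten completion."""
    for _ in range(max_retries + 1):
        try:
            return parse(completion)
        except ValueError as err:
            completion = fix(prompt=prompt, completion=completion, error=str(err))
    raise ValueError("could not parse after retries")
```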
ba4e65a05714-0 | Chains#
Chains are easily reusable components that can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_api_answer_prompt » all fi... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-1 | field requests_wrapper: TextRequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-2 | pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits a document, then analyzes it in pieces.
Validators
raise_deprecation » all fields
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.te... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-3 | llm = OpenAI()
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
llm=llm,
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-4 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'i... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-5 | as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-6 | consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-7 | which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that ma... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-8 | has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian phys... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-9 | following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-10 | Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If t... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-11 | harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn d... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-12 | sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request'... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-13 | model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anythi... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-14 | Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-15 | law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your train... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-16 | and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please c... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-17 | to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-18 | Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.C... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-19 | field retriever: BaseRetriever [Required]#
Index to connect to.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-20 | property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
pydantic model langchain.chains.GraphCypherQAChain[source]#
Chain for question-answering against a graph by generating Cypher statements.
Validators
raise_deprecation » all fields
set_verbose ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-21 | field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-22 | set_verbose » verbose
field entity_extraction_chain: LLMChain [Required]#
field graph: NetworkxEntityGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-23 | pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#
Generate a hypothetical document for the query, then embed it.
Based on https://arxiv.org/abs/2212.10496
Validators
raise_deprecation » all fields
set_verbose » verbose
field base_embeddings: Embeddings [Required]#
field llm_chain: LLMChain [Required... | https://python.langchain.com/en/latest/reference/modules/chains.html |
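The HyDE idea above has two steps: ask an LLM for a hypothetical answer document, then embed that document instead of the raw query. A sketch with `generate_doc` and `embed` as caller-supplied stand-ins for the LLM chain and the base embeddings:

```python
def hyde_embed(query, generate_doc, embed):
    """Sketch of Hypothetical Document Embeddings (HyDE): generate a
    hypothetical answer passage for the query, then embed that passage
    instead of the raw query. Both callables are caller-supplied stand-ins."""
    passage = generate_doc(f"Write a passage that answers: {query}")
    return embed(passage)
```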
ba4e65a05714-24 | field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no nee... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-25 | [Deprecated]
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-26 | field prompt: BasePromptTemplate [Required]#
Prompt object to use.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-27 | Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackMa... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-28 | Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]][source]#
Call predict and then parse the ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-29 | [Deprecated]
field list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', vali... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-30 | [Deprecated] Prompt to use when questioning the documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-31 | Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-32 | [Deprecated] Prompt to use to translate to python if necessary.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expres... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-33 | field requests_wrapper: TextRequestsWrapper [Optional]#
field text_length: int = 8000#
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-34 | [Deprecated]
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-35 | Maximum number of times to check the assertions. Default to double-checking.
field revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-36 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bull... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-37 | are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be complet... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-38 | The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-39 | pydantic model langchain.chains.MapReduceChain[source]#
Map-reduce chain.
Validators
raise_deprecation » all fields
set_verbose » verbose
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine documents.
field text_splitter: TextSplitter [Required]#
Text splitter to use.
classmetho... | https://python.langchain.com/en/latest/reference/modules/chains.html |
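The map-reduce pattern behind `MapReduceChain` (split the text, run a chain over each piece, then combine the partial results) can be sketched with plain callables standing in for the text splitter and the chains:

```python
def map_reduce(text, split, map_chain, combine_chain):
    """Sketch of the map-reduce pattern: split the text into chunks,
    run a chain over each chunk (map), then combine the partial
    results (reduce). All callables are stand-ins for the real chains."""
    chunks = split(text)
    partials = [map_chain(chunk) for chunk in chunks]
    return combine_chain(partials)
```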
ba4e65a05714-40 | Chain interacts with an OpenAPI endpoint using natural language.
Validators
raise_deprecation » all fields
set_verbose » verbose
field api_operation: APIOperation [Required]#
field api_request_chain: LLMChain [Required]#
field api_response_chain: Optional[LLMChain] = None#
field param_mapping: _ParamMapping [Required]#... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-41 | set_verbose » verbose
field get_answer_expr: str = 'print(solution())'#
field llm: Optional[BaseLanguageModel] = None#
[Deprecated]
field llm_chain: LLMChain [Required]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-42 | field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n """Olivia has $23. She bought five bagels for $... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-43 | end of wednesday?"""\n golf_balls_initial = 58\n golf_balls_lost_tuesday = 23\n golf_balls_lost_wednesday = 2\n golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n result = golf_balls_left\n return result\n\n\n\n\n\nQ: There were nine computers in the server ro... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-44 | toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n toys_initial = 5\n mom_toys = 2\n dad_toys = ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-45 | = 12\n denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n result = denny_lollipops\n return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n """Leah had 32 chocolate... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-46 | and 2 more cars arrive, how many cars are in the parking lot?"""\n cars_initial = 3\n cars_arrived = 2\n total_cars = cars_initial + cars_arrived\n result = total_cars\n return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, th... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-47 | [Deprecated]
field python_globals: Optional[Dict[str, Any]] = None#
field python_locals: Optional[Dict[str, Any]] = None#
field return_intermediate_steps: bool = False#
field stop: str = '\n\n'#
classmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-48 | Chain for question-answering against an index.
Example
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.faiss import FAISS
from langchain.vectorstores.base import VectorStoreRetriever
retriever = VectorStoreRetriever(vectorstore=FAISS(...))
retrievalQA = RetrievalQA.from_llm(llm... | https://python.langchain.com/en/latest/reference/modules/chains.html |
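Behind `RetrievalQA` is a simple stuff-style flow: retrieve the top-k relevant documents, pack them into a prompt, and ask the LLM. A hypothetical sketch with `retrieve` and `llm` as stand-ins for a retriever and a language model:

```python
def retrieval_qa(question, retrieve, llm, k=4):
    """Sketch of retrieval QA in the 'stuff' style: fetch the top-k
    relevant documents, pack them into one prompt, ask the LLM.
    'retrieve' and 'llm' stand in for a retriever and a language model."""
    docs = retrieve(question)[:k]
    context = "\n\n".join(docs)
    prompt = (
        "Answer using only the context below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```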
ba4e65a05714-49 | [Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: Optional[BasePromptTemplate] = None#
[Deprecated] Prompt to use to translate natural language to SQL.
field query_checker_prompt: Optional[BasePromptTemplate] = None#
The prompt template that should be used by the query checker
field r... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-50 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input ques... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-51 | SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], ... | https://python.langchain.com/en/latest/reference/modules/chains.html |
ba4e65a05714-52 | Load the necessary chains.
pydantic model langchain.chains.SequentialChain[source]#
Chain where the outputs of one chain feed directly into the next.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field input_variables... | https://python.langchain.com/en/latest/reference/modules/chains.html |
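The sequential-chain behavior (outputs of one chain feeding into the next) can be sketched with a shared dict of known values that each step reads from and extends; each `chain` here is a plain callable standing in for `Chain.__call__`:

```python
def run_sequential(chains, inputs):
    """Sketch of a sequential chain: keep a dict of known values; each
    step reads from it and adds its outputs, so later chains can use the
    outputs of earlier ones. Each 'chain' is a dict-in, dict-out callable."""
    known = dict(inputs)
    for chain in chains:
        known.update(chain(known))
    return known
```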
ba4e65a05714-53 | field vectorstore: VectorStore [Required]#
Vector Database to connect to.
pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]#
Question-answering with sources over a vector database.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_naming » all fields
field k: int = 4#
Number of r... | https://python.langchain.com/en/latest/reference/modules/chains.html |
64eaf8b74d74-0 | Agents#
Interface for agents.
pydantic model langchain.agents.Agent[source]#
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called “agent_scratchpad” where the agent can put its
intermediary wor... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-1 | Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]][source]#
get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]#
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[l... | https://python.langchain.com/en/latest/reference/modules/agents.html |
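The loop that `plan` and `get_full_inputs` support can be sketched as follows: the scratchpad accumulates prior (action, input, observation) triples so each planning call sees the agent's intermediary work. `plan` and `tools` here are hypothetical stand-ins, not the library API:

```python
def run_agent(question, plan, tools, max_iterations=15):
    """Sketch of the agent loop: the scratchpad accumulates prior
    (action, input, observation) triples so each planning call sees the
    agent's intermediary work. 'plan' stands in for the LLMChain step and
    returns ('finish', answer) or (tool_name, tool_input)."""
    scratchpad = []
    for _ in range(max_iterations):
        action, value = plan(question, scratchpad)
        if action == "finish":
            return value
        observation = tools[action](value)
        scratchpad.append((action, value, observation))
    return None  # gave up; cf. max_iterations / early_stopping_method
```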
64eaf8b74d74-2 | field early_stopping_method: str = 'force'#
field handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False#
field max_execution_time: Optional[float] = None#
field max_iterations: Optional[int] = 15#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-3 | SELF_ASK_WITH_SEARCH = 'self-ask-with-search'#
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'#
ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'#
pydantic model langchain.agents.BaseMultiActionAgent[source]#
Base Agent class.
abstract async aplan(intermediate_steps... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-4 | Return response when agent has been stopped due to max iterations.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_l... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-5 | get_allowed_tools() → Optional[List[str]][source]#
abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, l... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-6 | classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of ... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-7 | powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the follo... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-8 | 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]# | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-9 | Create prompt in the style of the zero shot agent.
Parameters
tools – List of tools the agent will have access to, used to format the
prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
ai_prefix – String to use before AI output.
human_prefix – String to use before h... | https://python.langchain.com/en/latest/reference/modules/agents.html |
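`create_prompt` presumably assembles the template from these parameters in order: prefix, a rendered list of tool names and descriptions, then the suffix carrying the `{input}` and `{agent_scratchpad}` slots. A hypothetical sketch (with tools as (name, description) pairs, not `BaseTool` objects):

```python
def create_zero_shot_prompt(tools, prefix, suffix):
    """Hypothetical sketch of the assembly: prefix, then one line per
    tool ('name: description'), then the suffix, which carries the
    {input} and {agent_scratchpad} slots."""
    tool_lines = "\n".join(f"{name}: {description}" for name, description in tools)
    return "\n\n".join([prefix, tool_lines, suffix])

template = create_zero_shot_prompt(
    [("Search", "useful for looking things up")],
    prefix="Answer the question using the tools below.",
    suffix="Question: {input}\n{agent_scratchpad}",
)
print(template)
```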
64eaf8b74d74-10 | classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large la... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-11 | receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-12 | the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-13 | Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.ConversationalChatAgent[source]#
An agent designed to hold a conversation in addition to using tools.
field out... | https://python.langchain.com/en/latest/reference/modules/agents.html |
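The `llm_prefix` and `observation_prefix` properties above control how intermediate steps are rendered into the agent scratchpad. A hedged sketch of that rendering, with hypothetical prefix values and step data (the real values come from the agent class):

```python
# Sketch of how an agent scratchpad is assembled from intermediate steps
# using observation_prefix and llm_prefix. Values are illustrative.
observation_prefix = "Observation: "
llm_prefix = "Thought:"

def build_scratchpad(intermediate_steps):
    """Render (action_log, observation) pairs into the scratchpad text."""
    thoughts = ""
    for log, observation in intermediate_steps:
        thoughts += log
        thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
    return thoughts

pad = build_scratchpad([("I should search.", "LangChain is a framework.")])
```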
64eaf8b74d74-14 | classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide r... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-15 | classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, system_message: str = 'Assistant is a ... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-16 | it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific questi... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-17 | Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.LLMSingleActionAgent[source]#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field output_parser: l... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-18 | pydantic model langchain.agents.MRKLChain[source]#
Chain that implements the MRKL system.
Example
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
prompt = PromptTemplate(...)
chains = [...]
mrkl = MRKLChain.from_chains(llm, chains)
Valida... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-19 | action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
pydantic model langchain.agents.ReActChain[source]#
Chain that implements the ReAct paper.
Example
from langchain import ReActChain, OpenAI
react = ReActChain(llm=OpenAI())
Validators
raise_deprecation » all fields
set_verbose » ver... | https://python.langchain.com/en/latest/reference/modules/agents.html |
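The `ChainConfig` fragments above are truncated; a self-contained sketch of the same pattern, with a plain function standing in for a real LLM-backed chain (this mimics `langchain.chains.mrkl.base.ChainConfig`, it is not langchain code):

```python
# Pure-Python sketch of the ChainConfig pattern shown above. The action
# is a plain function instead of something like LLMMathChain(llm=llm).run.
from typing import Callable, NamedTuple

class ChainConfig(NamedTuple):
    action_name: str
    action: Callable[[str], str]
    action_description: str

def calculator(query: str) -> str:
    # Stub for a real math chain; illustration only, never eval untrusted input.
    return str(eval(query))

chains = [
    ChainConfig(
        action_name="Calculator",
        action=calculator,
        action_description="useful for doing math",
    )
]
result = chains[0].action("2 + 3")
```

With langchain installed, a list like this is what `MRKLChain.from_chains(llm, chains)` consumes.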
64eaf8b74d74-20 | field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-21 | classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-22 | Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.Tool[source]#
Tool that takes in function or coroutine directly.
field coroutine: Optional[Callable[[...], Awai... | https://python.langchain.com/en/latest/reference/modules/agents.html |
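`Tool` wraps a plain function, and optionally a coroutine for async use, together with a name and description. A minimal stand-in showing that shape (assumed method names; not the langchain class itself):

```python
# Minimal stand-in for langchain.agents.Tool: a name, a description,
# a callable, and an optional async coroutine.
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable, Optional

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]
    coroutine: Optional[Callable[[str], Awaitable[str]]] = None

    def run(self, tool_input: str) -> str:
        return self.func(tool_input)

    async def arun(self, tool_input: str) -> str:
        # Prefer the coroutine when one was supplied, else fall back to func.
        if self.coroutine is not None:
            return await self.coroutine(tool_input)
        return self.func(tool_input)

echo = Tool(name="Echo", description="repeats its input", func=lambda s: s)
sync_out = echo.run("hello")
async_out = asyncio.run(echo.arun("world"))
```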
64eaf8b74d74-23 | field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchp... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-24 | Returns
A PromptTemplate with the template assembled from the pieces here.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.age... | https://python.langchain.com/en/latest/reference/modules/agents.html |
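Per the defaults above, the zero-shot prompt is assembled from a prefix, the rendered tools, and a suffix containing `{input}` and `{agent_scratchpad}`. A pure-Python sketch of that assembly and of formatting the result (the real method returns a langchain `PromptTemplate`; the joining convention is an assumption):

```python
# Sketch of how ZeroShotAgent.create_prompt assembles its template.
prefix = ("Answer the following questions as best you can. "
          "You have access to the following tools:")
suffix = "Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}"

def assemble(tools, prefix, suffix):
    """Join prefix, one 'name: description' line per tool, and suffix."""
    tool_strings = "\n".join(f"{name}: {desc}" for name, desc in tools)
    return "\n\n".join([prefix, tool_strings, suffix])

template = assemble([("Search", "useful for current events")], prefix, suffix)
prompt = template.format(input="What is LangChain?", agent_scratchpad="")
```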
64eaf8b74d74-25 | langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-26 | use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin y... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-27 | 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, sh... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-28 | Construct a json agent from an LLM and tools. | https://python.langchain.com/en/latest/reference/modules/agents.html |
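The JSON agent's instructions above (add one key at a time, and on a `KeyError` back up and look at the available keys) describe a navigation strategy that can be mimicked with a small helper. This is an illustration of the strategy, not langchain code:

```python
# Walks a nested JSON-like dict one key at a time, as the json agent's
# prompt instructs; on a missing key it reports the keys that do exist
# instead of failing, mirroring the "look at the available keys" advice.
def explore(data, path):
    current = data
    for key in path:
        try:
            current = current[key]
        except (KeyError, TypeError):
            available = list(current.keys()) if isinstance(current, dict) else []
            return f"KeyError at {key!r}; available keys: {available}"
    return current

spec = {"info": {"title": "Pet Store", "version": "1.0"}}
title = explore(spec, ["info", "title"])
miss = explore(spec, ["info", "license"])
```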
64eaf8b74d74-29 | langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web reque... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-30 | which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that ... | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-31 | None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]# | https://python.langchain.com/en/latest/reference/modules/agents.html |
64eaf8b74d74-32 | Construct an OpenAPI agent from an LLM and tools.
langchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[Lis... | https://python.langchain.com/en/latest/reference/modules/agents.html |
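`create_pandas_dataframe_agent` builds its prompt around the dataframe so the model knows the columns before writing Python against `df`. A rough pure-Python mimic of that prompt-side idea, using a list of dicts in place of a real pandas `DataFrame` (the exact preview text the agent injects is an assumption):

```python
# Rough mimic of the dataframe-agent prompt setup: embed a small preview
# of the data in the prefix, similar in spirit to print(df.head()).
rows = [
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": 25},
]

def head_preview(rows, n=5):
    """Render the first n rows as a tiny tab-separated table."""
    cols = list(rows[0].keys())
    lines = ["\t".join(cols)]
    lines += ["\t".join(str(r[c]) for c in cols) for r in rows[:n]]
    return "\n".join(lines)

prefix = (
    "You are working with a dataframe named `df`.\n"
    "This is the result of `print(df.head())`:\n" + head_preview(rows)
)
```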
64eaf8b74d74-33 | langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, pref... | https://python.langchain.com/en/latest/reference/modules/agents.html |