Columns: id (string, length 14–16), source (string, length 49–117), text (string, length 16–2.73k)
8f4ec7b51027-54
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
2. Initializes the Qdrant database as an in-memory docstore by default (overridable to a remote docstore). 3. Adds the text embeddings to the Qdrant database. This is intended to be a quick way to get started. Example from langchain import Qdrant from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbed...
8f4ec7b51027-55
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
similarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None) → List[Tuple[Document, float]][source]# Return docs most similar to query. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter – Filter by metadata. Defaults to Non...
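The chunk above documents `similarity_search_with_score`, which returns `(Document, score)` pairs for the top-k matches, optionally filtered by metadata. As a rough, self-contained sketch of what such a method computes (the real vectorstore is database-backed; the `Document` dataclass, the toy corpus, and the `metadata_filter` keyword here are made up for illustration):

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Document:
    page_content: str
    metadata: dict

def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity of two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_search_with_score(
    query_vec: List[float],
    docs: List[Document],
    vecs: List[List[float]],
    k: int = 4,
    metadata_filter: Optional[dict] = None,
) -> List[Tuple[Document, float]]:
    """Score every (non-filtered) doc against the query and keep the top k."""
    scored = []
    for doc, vec in zip(docs, vecs):
        if metadata_filter and any(
            doc.metadata.get(key) != val for key, val in metadata_filter.items()
        ):
            continue  # metadata filter: skip non-matching docs
        scored.append((doc, cosine(query_vec, vec)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

docs = [
    Document("apple pie", {"lang": "en"}),
    Document("tarte aux pommes", {"lang": "fr"}),
    Document("car engine", {"lang": "en"}),
]
vecs = [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]
hits = similarity_search_with_score(
    [1.0, 0.0], docs, vecs, k=2, metadata_filter={"lang": "en"}
)
print([(d.page_content, round(s, 3)) for d, s in hits])
```

This is a linear scan; production stores replace the loop with an approximate nearest-neighbor index, but the contract (top-k pairs of document and score, metadata filtering) is the same.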
8f4ec7b51027-56
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None. keys (Optional[List[str]], optional) – Optional key values to use as ids. Defaults to None. batch_size (int, optional) – Batch size to use for writes. Defaults to 1000. Returns List of ids added to the vectorstore ...
8f4ec7b51027-57
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
.. rubric:: Example classmethod from_texts_return_keys(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['...
8f4ec7b51027-58
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
to be considered a match. Defaults to 0.2. Because the similarity calculation algorithm is based on cosine similarity, the smaller the angle, the higher the similarity. Returns A list of documents that are most similar to the query text, including the match score for each document. Retu...
8f4ec7b51027-59
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.sklearn.SKLearnVectorStore[source]# Return VectorStore initialized from texts and...
8f4ec7b51027-60
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
:param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. persist() → None[source]# similarity_search(query: str, k: int = 4...
8f4ec7b51027-61
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
metadatas – Optional list of metadatas associated with the texts. kwargs – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. add_vectors(vectors: List[List[float]], documents: List[langchain.schema.Document]) → List[str][source]# classmethod from_texts(texts: List[str], emb...
8f4ec7b51027-62
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
metadata jsonb, embedding vector(1536), similarity float) LANGUAGE plpgsql AS $$ #variable_conflict use_column BEGIN RETURN QUERY SELECT id, content, metadata, embedding, 1 - (docstore.embedding <=> query_embedding) AS similarity FROM docstore ORDER BY docstore.embedding <=> query_embedding LIMIT match_count; END; $$; ```...
8f4ec7b51027-63
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
List of Documents most similar to the query vector. similarity_search_by_vector_returning_embeddings(query: List[float], k: int) → List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]][source]# similarity_search_by_vector_with_relevance_scores(query: List[float], k: int) → List[Tuple[langchai...
8f4ec7b51027-64
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
Drop an existing index. Parameters index_name (str) – Name of the index to drop. Returns True if the index is dropped successfully. Return type bool classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name:...
8f4ec7b51027-65
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
class langchain.vectorstores.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]# Wrapper around Typesense vector search. To use, you should have the typesense python package installed. Example from langchain.embeddings.openai imp...
8f4ec7b51027-66
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
classmethod from_client_params(embedding: langchain.embeddings.base.Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) → langchain.vectorstores.typesense.Typesense[source]# Initializ...
8f4ec7b51027-67
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
similarity_search_with_score(query: str, k: int = 4, filter: Optional[str] = '') → List[Tuple[langchain.schema.Document, float]][source]# Return typesense documents most similar to query, along with scores. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. filter...
8f4ec7b51027-68
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.vectara.Vectara[source]# Construct Vectara wrapper from raw documents. This is intended to be a quick way to get started. .. rubric::...
8f4ec7b51027-69
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
alpha – parameter for hybrid search (called "lambda" in Vectara documentation). filter – Dictionary of argument(s) to filter on metadata. For example a filter can be "doc.rating > 3.0 and part.lang = 'deu'" see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details. Returns List of Documents mo...
8f4ec7b51027-70
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
Returns List of ids from adding the texts into the vectorstore. async classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VST[source]# Return VectorStore initialized from documents and embeddings. async cla...
8f4ec7b51027-71
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]# Return docs most similar to embedding vector. async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]# Re...
8f4ec7b51027-72
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]# Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among s...
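The MMR methods above greedily pick documents that are similar to the query but dissimilar to documents already selected, with `lambda_mult` trading relevance (1.0) against diversity (0.0). A from-scratch sketch of that selection loop (not LangChain's exact implementation; vectors here are toy values):

```python
import math
from typing import List

def _cos(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def max_marginal_relevance(
    query_vec: List[float],
    doc_vecs: List[List[float]],
    k: int = 4,
    lambda_mult: float = 0.5,
) -> List[int]:
    """Greedily pick doc indices balancing query similarity and diversity."""
    candidates = list(range(len(doc_vecs)))
    selected: List[int] = []
    while candidates and len(selected) < k:
        best_idx, best_score = candidates[0], -float("inf")
        for i in candidates:
            relevance = _cos(query_vec, doc_vecs[i])
            # Penalty: highest similarity to anything already selected.
            redundancy = max(
                (_cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        candidates.remove(best_idx)
    return selected

# One near-duplicate of the best match, and one genuinely different doc.
vecs = [[1.0, 0.0], [0.999, 0.001], [0.8, 0.6]]
print(max_marginal_relevance([1.0, 0.0], vecs, k=2, lambda_mult=0.3))
```

With a diversity-leaning `lambda_mult=0.3` the near-duplicate at index 1 is skipped in favor of the distinct document at index 2; with `lambda_mult=1.0` the method degenerates to plain top-k relevance. The documented `fetch_k` parameter simply bounds how many candidates are fetched from the index before this loop runs.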
8f4ec7b51027-73
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query – input text k – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 to 1 to filter the re...
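The chunk above describes relevance scores normalized to [0, 1] plus an optional `score_threshold` kwarg. A minimal sketch of the thresholding step, assuming scores have already been normalized (the pair list here is illustrative):

```python
from typing import List, Optional, Tuple

def filter_by_relevance(
    scored_docs: List[Tuple[str, float]],
    score_threshold: Optional[float] = None,
) -> List[Tuple[str, float]]:
    """Keep (doc, score) pairs whose score meets the optional threshold."""
    if score_threshold is None:
        return scored_docs
    return [(d, s) for d, s in scored_docs if s >= score_threshold]

pairs = [
    ("intro to transformers", 0.91),
    ("gardening tips", 0.18),
    ("attention is all you need", 0.75),
]
print(filter_by_relevance(pairs, score_threshold=0.5))
# → [('intro to transformers', 0.91), ('attention is all you need', 0.75)]
```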
8f4ec7b51027-74
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
Adds the documents to the newly created Weaviate index. This is intended to be a quick way to get started. Example from langchain.vectorstores.weaviate import Weaviate from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() weaviate = Weaviate.from_texts( texts, embeddings, weaviat...
8f4ec7b51027-75
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. similar...
8f4ec7b51027-76
https://python.langchain.com/en/latest/reference/modules/vectorstores.html
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = N...
2789b4cd54d7-0
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
Document Compressors# pydantic model langchain.retrievers.document_compressors.CohereRerank[source]# field client: Client [Required]# field model: str = 'rerank-english-v2.0'# field top_n: int = 3# async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Seq...
2789b4cd54d7-1
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
field similarity_fn: Callable = <function cosine_similarity># Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. field similarity_threshold: Optional[float] = None# Threshold for ...
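The `similarity_fn` field above expects a function taking two matrices (`List[List[float]]`) and returning a matrix of pairwise scores, higher meaning more similar. A minimal cosine implementation matching that contract (a sketch; the library's default may be vectorized with NumPy):

```python
import math
from typing import List

def cosine_similarity(
    X: List[List[float]], Y: List[List[float]]
) -> List[List[float]]:
    """Pairwise cosine similarity: result[i][j] compares X[i] with Y[j]."""
    def norm(v: List[float]) -> float:
        return math.sqrt(sum(x * x for x in v))
    return [
        [sum(a * b for a, b in zip(x, y)) / (norm(x) * norm(y)) for y in Y]
        for x in X
    ]

scores = cosine_similarity([[1.0, 0.0], [0.0, 2.0]], [[3.0, 0.0], [1.0, 1.0]])
print([[round(s, 3) for s in row] for row in scores])
```

A `similarity_threshold` can then be applied to this matrix, e.g. to drop documents whose best score against the rest exceeds the threshold (redundancy filtering) or falls below it (relevance filtering).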
2789b4cd54d7-2
https://python.langchain.com/en/latest/reference/modules/document_compressors.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) → langchain.retrievers.document_compressors.chain_extract.LLMChainE...
06bfa936903b-0
https://python.langchain.com/en/latest/reference/modules/retrievers.html
Retrievers# pydantic model langchain.retrievers.ArxivRetriever[source]# It is effectively a wrapper for ArxivAPIWrapper. It wraps load() to get_relevant_documents(). It uses all ArxivAPIWrapper arguments without any change. async aget_relevant_documents(query: str) → List[langchain.schema.Document]...
06bfa936903b-1
https://python.langchain.com/en/latest/reference/modules/retrievers.html
Get documents relevant for a query. Parameters query – string to find relevant documents for Returns List of relevant documents pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]# field aiosession: Optional[aiohttp.client.ClientSession] = None# field bearer_token: str [Required]# field filter: Optional[...
06bfa936903b-2
https://python.langchain.com/en/latest/reference/modules/retrievers.html
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]# async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]# Get documents relevant for a query. Parameters query – string to find relevant documents for Returns...
06bfa936903b-3
https://python.langchain.com/en/latest/reference/modules/retrievers.html
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. add_texts(texts: Iterable[str], refresh_indices: bool = True) → List[str][source]# Run more texts through the embeddings and add to the retriever. Parameters texts – Iterable of strings to add to the retriever. refresh_indices – bool to refresh Elastic...
06bfa936903b-4
https://python.langchain.com/en/latest/reference/modules/retrievers.html
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.knn.KNNRetriever[source]# get_relevant_documents(query: str) → List[langchain.schema.Document][source]# Get documents relevant for a query. Parameters query – string to find relevant document...
06bfa936903b-5
https://python.langchain.com/en/latest/reference/modules/retrievers.html
query – string to find relevant documents for Returns List of relevant documents pydantic model langchain.retrievers.PubMedRetriever[source]# It is effectively a wrapper for PubMedAPIWrapper. It wraps load() to get_relevant_documents(). It uses all PubMedAPIWrapper arguments without any change. async aget_relevant_docu...
06bfa936903b-6
https://python.langchain.com/en/latest/reference/modules/retrievers.html
field texts: List[str] [Required]# async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]# Get documents relevant for a query. Parameters query – string to find relevant documents for Returns List of relevant documents classmethod from_texts(texts: List[str], embeddings: langchain.embedding...
06bfa936903b-7
https://python.langchain.com/en/latest/reference/modules/retrievers.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = No...
06bfa936903b-8
https://python.langchain.com/en/latest/reference/modules/retrievers.html
Get documents relevant for a query. Parameters query – string to find relevant documents for Returns List of relevant documents pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]# Retriever combining embedding similarity with recency. field decay_rate: float = 0.01# The exponential decay facto...
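The `TimeWeightedVectorStoreRetriever` above combines embedding similarity with recency via `decay_rate`. Its scoring is commonly described as adding a recency term that decays exponentially with the hours since the document was last accessed; a sketch of that combination (formula assumed from the field description, not copied from the implementation):

```python
def time_weighted_score(
    similarity: float, hours_passed: float, decay_rate: float = 0.01
) -> float:
    """Combine semantic similarity with recency: newer docs decay less."""
    # (1 - decay_rate) ** hours_passed → 1.0 for brand-new docs, → 0.0 as they age.
    return similarity + (1.0 - decay_rate) ** hours_passed

fresh = time_weighted_score(0.5, hours_passed=1)    # recent, modest similarity
stale = time_weighted_score(0.9, hours_passed=400)  # very similar but old
print(round(fresh, 3), round(stale, 3))
```

With the default `decay_rate = 0.01`, a recently accessed document with mediocre similarity can outrank an old document with high similarity, which is the retriever's stated purpose.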
06bfa936903b-9
https://python.langchain.com/en/latest/reference/modules/retrievers.html
get_salient_docs(query: str) → Dict[int, Tuple[langchain.schema.Document, float]][source]# Return documents that are salient to the query. class langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]# async aget_relevant_documents(query: ...
06bfa936903b-10
https://python.langchain.com/en/latest/reference/modules/retrievers.html
get_relevant_documents(query: str) → List[langchain.schema.Document][source]# Get documents relevant for a query. Parameters query – string to find relevant documents for Returns List of relevant documents get_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) → List[langchain.schema.Document]...
06bfa936903b-11
https://python.langchain.com/en/latest/reference/modules/retrievers.html
query – string to find relevant documents for Returns List of relevant documents get_relevant_documents(query: str) → List[langchain.schema.Document][source]# Get documents relevant for a query. Parameters query – string to find relevant documents for Returns List of relevant documents class langchain.retrievers.ZepRet...
e273f929c25e-0
https://python.langchain.com/en/latest/reference/modules/docstore.html
Docstore# Wrappers on top of docstores. class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]# Simple in memory docstore in the form of a dict. add(texts: Dict[str, langchain.schema.Document]) → None[source]# Add texts to in memory dictionary. search(search: s...
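The `InMemoryDocstore` described above is essentially a wrapper around a dict mapping ids to documents. A minimal re-implementation sketch (behavior inferred from the docstring — `add()` inserts by id, `search()` looks an id up; details such as the exact error handling are assumptions):

```python
from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

class InMemoryDocstore:
    """Dict-backed docstore: add() inserts by id, search() looks an id up."""

    def __init__(self, _dict: Dict[str, Document]):
        self._dict = dict(_dict)

    def add(self, texts: Dict[str, Document]) -> None:
        # Refuse to silently overwrite existing entries.
        overlap = set(texts) & set(self._dict)
        if overlap:
            raise ValueError(f"Tried to add ids that already exist: {overlap}")
        self._dict.update(texts)

    def search(self, search: str) -> Union[str, Document]:
        # Returns the Document, or an explanatory string when the id is unknown.
        if search not in self._dict:
            return f"ID {search} not found."
        return self._dict[search]

store = InMemoryDocstore({"d1": Document("hello world")})
store.add({"d2": Document("second doc")})
print(store.search("d2").page_content)
print(store.search("missing"))
```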
52eda67e6054-0
https://python.langchain.com/en/latest/reference/modules/prompts.html
PromptTemplates# Prompt template classes. pydantic model langchain.prompts.BaseChatPromptTemplate[source]# format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt....
52eda67e6054-1
https://python.langchain.com/en/latest/reference/modules/prompts.html
prompt.save(file_path="path/prompt.yaml") pydantic model langchain.prompts.ChatPromptTemplate[source]# format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format...
52eda67e6054-2
https://python.langchain.com/en/latest/reference/modules/prompts.html
field suffix: str [Required]# A prompt template string to put after the examples. field template_format: str = 'f-string'# The format of the prompt template. Options are: 'f-string', 'jinja2'. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) → Dict[source]# Return...
52eda67e6054-3
https://python.langchain.com/en/latest/reference/modules/prompts.html
The format of the prompt template. Options are: 'f-string', 'jinja2'. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) → Dict[source]# Return a dictionary of the prompt. format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – An...
52eda67e6054-4
https://python.langchain.com/en/latest/reference/modules/prompts.html
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]# Take examples in list format with prefix and suffix to create a prompt. Intended to be used as a way to dynamically...
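`from_examples` above assembles a few-shot prompt from a prefix, a list of example strings, and a suffix, joined by `example_separator`. A self-contained sketch of that assembly step (the real classmethod returns a `PromptTemplate` object; here we just build the template string and format it with `str.format`, the f-string style the docs describe):

```python
from typing import List

def build_prompt(
    examples: List[str],
    suffix: str,
    prefix: str = "",
    example_separator: str = "\n\n",
) -> str:
    """Join prefix, examples, and suffix the way a few-shot template is assembled."""
    parts = [p for p in [prefix, *examples, suffix] if p]
    return example_separator.join(parts)

template = build_prompt(
    examples=["Q: 2+2?\nA: 4", "Q: 3+3?\nA: 6"],
    suffix="Q: {question}\nA:",
    prefix="Answer the questions.",
)
print(template.format(question="5+5?"))
```

The suffix carries the input variable (`{question}`), so formatting the assembled template yields a ready-to-send few-shot prompt.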
52eda67e6054-5
https://python.langchain.com/en/latest/reference/modules/prompts.html
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 04, 2023.
55ba9a0aa8d3-0
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
Output Parsers# pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]# Parse out comma separated lists. get_format_instructions() → str[source]# Instructions on how the LLM output should be formatted. parse(text: str) → List[str][source]# Parse the output of an LLM call...
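The `CommaSeparatedListOutputParser` above pairs format instructions (telling the LLM to emit comma-separated values) with a `parse` step that splits the completion back into a list. A minimal sketch of both halves (the instruction wording here is paraphrased, not the library's exact string):

```python
from typing import List

def get_format_instructions() -> str:
    # What gets appended to the prompt so the LLM answers in a parseable shape.
    return (
        "Your response should be a list of comma separated values, "
        "eg: `foo, bar, baz`"
    )

def parse(text: str) -> List[str]:
    """Split an LLM completion on commas and strip whitespace."""
    return [part.strip() for part in text.strip().split(",")]

print(parse("red, green,  blue"))  # → ['red', 'green', 'blue']
```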
55ba9a0aa8d3-1
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
pydantic model langchain.output_parsers.ListOutputParser[source]# Class to parse the output of an LLM call to a list. abstract parse(text: str) → List[str][source]# Parse the output of an LLM call. pydantic model langchain.output_parsers.OutputFixingParser[source]# Wraps a parser and tries to fix parsing errors. field ...
55ba9a0aa8d3-2
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
pydantic model langchain.output_parsers.PydanticOutputParser[source]# field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]# get_format_instructions() → str[source]# Instructions on how the LLM output should be formatted. parse(text: str) → langchain.output_parsers.pydantic.T[source]# Parse the ou...
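`PydanticOutputParser` above validates an LLM's JSON completion against a declared schema and returns a typed object. A stdlib-only sketch of the same idea, using a dataclass instead of pydantic (the `Joke` schema and `parse_into` helper are hypothetical, for illustration):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Joke:
    setup: str
    punchline: str

def parse_into(cls, completion: str):
    """Validate that an LLM's JSON completion has the fields cls declares."""
    data = json.loads(completion)
    missing = [f.name for f in fields(cls) if f.name not in data]
    if missing:
        raise ValueError(f"Completion missing fields: {missing}")
    # Keep only declared fields, so extra keys in the completion are ignored.
    return cls(**{f.name: data[f.name] for f in fields(cls)})

joke = parse_into(
    Joke,
    '{"setup": "Why did the chicken cross the road?", '
    '"punchline": "To get to the other side."}',
)
print(joke.punchline)
```

The raised `ValueError` is what a wrapper like `OutputFixingParser` would catch, feeding the error and the original completion back to an LLM for repair.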
55ba9a0aa8d3-3
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
LLM, and telling it the completion did not satisfy criteria in the prompt. field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]# field retry_chain: langchain.chains.llm.LLMChain [Required]# classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.sch...
55ba9a0aa8d3-4
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work, and raised the given error. Differs from RetryOutputParser in that this implementation provides the error tha...
55ba9a0aa8d3-5
https://python.langchain.com/en/latest/reference/modules/output_parsers.html
Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output py...
39c1624950ec-0
https://python.langchain.com/en/latest/reference/modules/chains.html
Chains# Chains are easily reusable components which can be linked together. pydantic model langchain.chains.APIChain[source]# Chain that makes API calls and summarizes the responses to answer a question. Validators raise_deprecation » all fields set_verbose » verbose validate_api_answer_prompt » all fi...
39c1624950ec-1
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the belo...
39c1624950ec-2
https://python.langchain.com/en/latest/reference/modules/chains.html
raise_deprecation » all fields set_verbose » verbose field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]# field text_splitter: langchain.text_splitter.TextSplitter [Optional]# pydantic model langchain.chains.ChatVectorDBChain[source]# Chain for chatting with a vector d...
39c1624950ec-3
https://python.langchain.com/en/latest/reference/modules/chains.html
qa_chain = LLMChain(llm=llm, prompt=qa_prompt) constitutional_chain = ConstitutionalChain.from_llm( llm=llm, chain=qa_chain, constitutional_principles=[ ConstitutionalPrinciple( critique_request="Tell if this answer is good.", revision_request="Give a better answer.", ...
39c1624950ec-4
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'i...
39c1624950ec-5
https://python.langchain.com/en/latest/reference/modules/chains.html
responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model's response could be interpreted as saying that it isn't strictly necessary. Furthermore, the model ...
39c1624950ec-6
https://python.langchain.com/en/latest/reference/modules/chains.html
the 1600's, but was probably made at least 100 years ago. I'm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I'm not sure about the 1/r^5 scaling so I should rewrite that to...
39c1624950ec-7
https://python.langchain.com/en/latest/reference/modules/chains.html
style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritiq...
39c1624950ec-8
https://python.langchain.com/en/latest/reference/modules/chains.html
human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious c...
39c1624950ec-9
https://python.langchain.com/en/latest/reference/modules/chains.html
{'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in r...
39c1624950ec-10
https://python.langchain.com/en/latest/reference/modules/chains.html
solely from your training data, and you're unable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun...
39c1624950ec-11
https://python.langchain.com/en/latest/reference/modules/chains.html
partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {cri...
39c1624950ec-12
https://python.langchain.com/en/latest/reference/modules/chains.html
Create a chain from an LLM. classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]# property input_keys: List[str]# Defines the input keys. property output_keys: List[str]# Defines the output keys. pydantic model langchain.chains.C...
39c1624950ec-13
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conve...
39c1624950ec-14
https://python.langchain.com/en/latest/reference/modules/chains.html
property output_keys: List[str]# Output keys this chain expects. pydantic model langchain.chains.GraphCypherQAChain[source]# Chain for question-answering against a graph by generating Cypher statements. Validators raise_deprecation » all fields set_verbose » verbose field cypher_generation_chain: LLMChain [Required]# f...
39c1624950ec-15
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe i...
39c1624950ec-16
https://python.langchain.com/en/latest/reference/modules/chains.html
field qa_chain: LLMChain [Required]# classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the ...
39c1624950ec-17
https://python.langchain.com/en/latest/reference/modules/chains.html
set_verbose » verbose field base_embeddings: Embeddings [Required]# field llm_chain: LLMChain [Required]# combine_embeddings(embeddings: List[List[float]]) → List[float][source]# Combine embeddings into final embeddings. embed_documents(texts: List[str]) → List[List[float]][source]# Call the base embeddings. embed_quer...
39c1624950ec-18
https://python.langchain.com/en/latest/reference/modules/chains.html
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer....
39c1624950ec-19
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash comm...
39c1624950ec-20
https://python.langchain.com/en/latest/reference/modules/chains.html
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]# Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], ca...
39c1624950ec-21
https://python.langchain.com/en/latest/reference/modules/chains.html
completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]][source]# Call apredict and then parse the results. async aprep_...
39c1624950ec-22
https://python.langchain.com/en/latest/reference/modules/chains.html
Call predict and then parse the results. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]# Prepare prompts from inputs. pydantic model langchain.chains.LLMCheckerC...
39c1624950ec-23
https://python.langchain.com/en/latest/reference/modules/chains.html
field revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format...
39c1624950ec-24
https://python.langchain.com/en/latest/reference/modules/chains.html
pydantic model langchain.chains.LLMMathChain[source]# Chain that interprets a prompt and executes python code to do math. Example from langchain import LLMMathChain, OpenAI llm_math = LLMMathChain.from_llm(OpenAI()) Validators raise_deprecation Β» all fields raise_deprecation Β» all fields set_verbose Β» verbose field llm...
39c1624950ec-25
https://python.langchain.com/en/latest/reference/modules/chains.html
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into an expression that can be executed using Python\'s numexpr library. Use t...
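`LLMMathChain` above has the LLM translate a question into an arithmetic expression, then evaluates that expression (the real chain uses the `numexpr` library). A stdlib-only sketch of the evaluation half, using a restricted AST walk instead of numexpr so that only plain arithmetic can run:

```python
import ast
import operator

# Only these operators are allowed; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression without exec/eval of arbitrary code."""
    def walk(node: ast.AST):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Disallowed expression element: {ast.dump(node)}")
    return walk(ast.parse(expression, mode="eval"))

print(safe_eval("37593 * 67"))  # → 2518731
```

Restricting the AST matters because the expression comes from an LLM: evaluating it with bare `eval` would let a malformed or adversarial completion execute arbitrary code.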
39c1624950ec-26
https://python.langchain.com/en/latest/reference/modules/chains.html
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]# Chain for question-answering with self-verification. Example from langchain import OpenAI, LLMSummarizationCheckerChain llm = OpenAI(temperature=0.0) checker_chain = LLMSummarizationCheckerChain.from_llm(llm) Validators raise_deprecation Β» all fields...
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assert...
field revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\n...
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bull...
PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some exa...
pydantic model langchain.chains.MapReduceChain[source]# Map-reduce chain. Validators raise_deprecation » all fields set_verbose » verbose field combine_documents_chain: BaseCombineDocumentsChain [Required]# Chain to use to combine documents. field text_splitter: TextSplitter [Required]# Text splitter to use. classmetho...
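The map-reduce idea behind MapReduceChain can be sketched in plain Python: split the text with a splitter, apply a per-chunk "map" step, then "reduce" the partial results into one output. The `map_step` stub below stands in for an LLM call; all names are hypothetical:

```python
# Dependency-free sketch of the map-reduce pattern MapReduceChain implements.
# All helper names are hypothetical; map_step stands in for an LLM call.

def split_words(text: str, n: int = 3) -> list:
    # Toy text splitter: chunks of n words (a real TextSplitter is smarter).
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(0, len(words), n)]

def map_step(chunk: str) -> str:
    # Pretend "summarizing" a chunk means keeping its first word.
    return chunk.split()[0]

def reduce_step(partials: list) -> str:
    # Combine the per-chunk results into a single output.
    return " ".join(partials)

doc = "alpha beta gamma delta epsilon zeta"
summary = reduce_step([map_step(c) for c in split_words(doc)])
print(summary)  # prints "alpha delta"
```

The split-map-reduce shape is what lets the chain handle documents longer than a single model context window.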
pydantic model langchain.chains.OpenAPIEndpointChain[source]# Chain that interacts with an OpenAPI endpoint using natural language. Validators raise_deprecation » all fields set_verbose » verbose field api_operation: APIOperation [Required]# field api_request_chain: LLMChain [Required]# field api_response_chain: Optional[LL...
set_verbose » verbose field get_answer_expr: str = 'print(solution())'# field llm: Optional[BaseLanguageModel] = None# [Deprecated] field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $...
each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = ...
result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    ...
trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True)#
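The few-shot examples embedded in this prompt are themselves runnable Python. Reconstructing the first (Olivia) example from the truncated template above, under the assumption that it follows the standard PAL variable-naming style, gives:

```python
# Reconstruction of the first few-shot example from the PAL prompt above.
# The variable names are an assumption based on the standard PAL style,
# since the template text is truncated in this reference.

def solution():
    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""
    money_initial = 23
    bagels = 5
    bagel_cost = 3
    money_spent = bagels * bagel_cost
    money_left = money_initial - money_spent
    result = money_left
    return result

print(solution())  # prints 8
```

This is exactly what `get_answer_expr = 'print(solution())'` (above) runs against the model's generated code.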
[Deprecated] field python_globals: Optional[Dict[str, Any]] = None# field python_locals: Optional[Dict[str, Any]] = None# field return_intermediate_steps: bool = False# field stop: str = '\n\n'# classmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal....
from langchain.llms import OpenAI from langchain.chains import RetrievalQA from langchain.vectorstores import FAISS from langchain.vectorstores.base import VectorStoreRetriever retriever = VectorStoreRetriever(vectorstore=FAISS(...)) retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever) Validators raise_deprec...
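The retrieve-then-answer flow of RetrievalQA can be sketched without any vector store: a retriever scores documents against the question and the top results are stuffed into the model's prompt. Word overlap below stands in for real embedding similarity, and `fake_llm` is a hypothetical stub:

```python
# Sketch of the RetrievalQA flow with no external dependencies. Word overlap
# stands in for vector similarity; fake_llm is a hypothetical stub.

docs = [
    "The capital of France is Paris.",
    "Python was created by Guido van Rossum.",
    "The Eiffel Tower is in Paris.",
]

def retrieve(question: str, k: int = 2) -> list:
    # Score each document by how many lowercase words it shares with the question.
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def fake_llm(prompt: str) -> str:
    # Stand-in for a model call: echo the most relevant retrieved sentence.
    return prompt.splitlines()[0]

question = "Who created Python?"
context = retrieve(question)
answer = fake_llm("\n".join(context) + f"\nQuestion: {question}")
```

A real VectorStoreRetriever replaces the word-overlap scoring with nearest-neighbor search over embeddings; the overall prompt-stuffing shape is the same.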
field prompt: Optional[BasePromptTemplate] = None# [Deprecated] Prompt to use to translate natural language to SQL. field query_checker_prompt: Optional[BasePromptTemplate] = None# The prompt template that should be used by the query checker. field return_direct: bool = False# Whether or not to return the result of quer...
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input ques...
langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]#
Load the necessary chains. pydantic model langchain.chains.SequentialChain[source]# Chain where the outputs of one chain feed directly into the next. Validators raise_deprecation » all fields set_verbose » verbose validate_chains » all fields field chains: List[langchain.chains.base.Chain] [Required]# field input_variables...
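A minimal sketch of the SequentialChain contract, with plain functions standing in for LLMChains: each step reads from a shared dict of variables and contributes new keys that later steps can consume. All names here are illustrative:

```python
# Illustrative sketch of the SequentialChain contract: each step reads a dict
# of variables and adds new keys for later steps. Plain functions stand in
# for LLMChains.

def make_title(inputs: dict) -> dict:
    return {"title": inputs["topic"].title() + ": A Study"}

def make_opening(inputs: dict) -> dict:
    return {"opening": f"This paper, '{inputs['title']}', examines {inputs['topic']}."}

def run_sequential(chains, inputs: dict) -> dict:
    state = dict(inputs)
    for chain in chains:
        state.update(chain(state))  # outputs of one chain feed the next
    return state

result = run_sequential([make_title, make_opening], {"topic": "graph theory"})
```

The `input_variables` / `validate_chains` machinery in the real class exists to check, up front, that every step's required keys will be present in this shared state by the time it runs.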
Vector Database to connect to. pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]# Question-answering with sources over a vector database. Validators raise_deprecation » all fields set_verbose » verbose validate_naming » all fields field k: int = 4# Number of results to return from store field max_token...
https://python.langchain.com/en/latest/reference/modules/agents.html
Agents# Interface for agents. pydantic model langchain.agents.Agent[source]# Class responsible for calling the language model and deciding the action. This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called "agent_scratchpad" where the agent can put its intermediary wor...
get_allowed_tools() → Optional[List[str]][source]# get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]# Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callba...
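A hedged sketch of how the "agent_scratchpad" variable might be assembled from the intermediate (action, observation) steps before each new model call; the exact formatting below is illustrative, not the library's verbatim behavior:

```python
# Hypothetical sketch of building the "agent_scratchpad" string from the
# intermediate (action_log, observation) steps accumulated so far; the exact
# formatting is illustrative, not LangChain's verbatim output.

def construct_scratchpad(intermediate_steps) -> str:
    thoughts = ""
    for action_log, observation in intermediate_steps:
        thoughts += action_log
        thoughts += f"\nObservation: {observation}\nThought: "
    return thoughts

steps = [
    ("I should look up the population.\nAction: Search[Paris population]",
     "2.1 million"),
]
scratchpad = construct_scratchpad(steps)
```

This is why the prompt MUST contain an `agent_scratchpad` variable: it is the slot where the running transcript of actions and observations is injected on every iteration.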
field handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]# classmethod from_agent_and_tools(agent: Un...
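The executor settings above (`max_iterations`, intermediate steps, tools) govern a control loop that can be sketched as follows; `plan` and the calculator tool are hypothetical stand-ins for a real agent and toolset:

```python
# Sketch of the AgentExecutor control loop: plan, act, observe, and stop once
# the agent finishes or max_iterations is exhausted. plan() and the
# calculator tool are hypothetical stand-ins.

def run_agent(plan, tools, max_iterations=15):
    steps = []
    for _ in range(max_iterations):
        action = plan(steps)
        if action["tool"] == "Final Answer":
            return action["input"], steps
        observation = tools[action["tool"]](action["input"])
        steps.append((action, observation))
    return "Agent stopped: max iterations reached", steps

def plan(steps):
    # Toy planner: call the calculator once, then finish with its result.
    if not steps:
        return {"tool": "calculator", "input": "2 + 2"}
    return {"tool": "Final Answer", "input": steps[-1][1]}

answer, steps = run_agent(plan, {"calculator": lambda expr: str(eval(expr))})
```

The `max_iterations` cap (default 15 above) is the safety valve that keeps a looping agent from calling the model indefinitely.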
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'# ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'# pydantic model langchain.agents.BaseMultiActionAgent[source]# Base Agent class. abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]...