}, "table": { "title": "Table", "default": "langchain", "env_names": "{'myscale_table'}", "type": "string" }, "metric": { "title": "Metric", "default": "cosine", "env_names": "{'myscale_metric'}", "type": "string" } }, "additionalProperties": false } Config env_file: str = .env env_file_encoding: str = utf-8 env_prefix: str = myscale_ Fields column_map (Dict[str, str]) database (str) host (str) index_param (Optional[Dict[str, str]]) index_type (str) metric (str) password (Optional[str]) port (int) table (str) username (Optional[str]) attribute column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'} attribute database: str = 'default' attribute host: str = 'localhost' attribute index_param: Optional[Dict[str, str]] = None attribute index_type: str = 'IVFFLAT' attribute metric: str = 'cosine' attribute password: Optional[str] = None attribute port: int = 8443 attribute table: str = 'langchain' attribute username: Optional[str] = None class langchain.vectorstores.Pinecone(index, embedding_function, text_key, namespace=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed. Example from langchain.vectorstores import Pinecone from langchain.embeddings.openai import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") index = pinecone.Index("langchain-demo") embeddings = OpenAIEmbeddings() vectorstore = Pinecone(index, embeddings.embed_query, "text") Parameters index (Any) – embedding_function (Callable) – text_key (str) – namespace (Optional[str]) – add_texts(texts, metadatas=None, ids=None, namespace=None, batch_size=32, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. namespace (Optional[str]) – Optional pinecone namespace to add the texts to. batch_size (int) – kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=4, filter=None, namespace=None)[source] Return pinecone documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4.
filter (Optional[dict]) – Dictionary of argument(s) to filter on metadata namespace (Optional[str]) – Namespace to search in. Default will search in '' namespace. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, filter=None, namespace=None, **kwargs)[source] Return pinecone documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – Dictionary of argument(s) to filter on metadata namespace (Optional[str]) – Namespace to search in. Default will search in '' namespace. kwargs (Any) – Returns List of Documents most similar to the query Return type List[langchain.schema.Document]
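For instance, continuing from the vectorstore constructed in the class example above, a scored, metadata-filtered query might look like this; the "source" metadata field and its value are illustrative, not part of the API:

```python
# Illustrative sketch: "source" is a hypothetical metadata field.
docs_and_scores = vectorstore.similarity_search_with_score(
    "What did the president say about the economy?",
    k=4,
    filter={"source": "state-of-the-union"},  # metadata filter
    namespace=None,  # None searches the default '' namespace
)
for doc, score in docs_and_scores:
    print(round(score, 3), doc.page_content[:80])
```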
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[dict]) – namespace (Optional[str]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[dict]) – namespace (Optional[str]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, ids=None, batch_size=32, text_key='text', index_name=None, namespace=None, **kwargs)[source] Construct Pinecone wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Adds the documents to a provided Pinecone index. This is intended to be a quick way to get started. Example from langchain import Pinecone from langchain.embeddings import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console
pinecone.init(api_key="***", environment="...") embeddings = OpenAIEmbeddings() pinecone = Pinecone.from_texts( texts, embeddings, index_name="langchain-demo" ) Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – batch_size (int) – text_key (str) – index_name (Optional[str]) – namespace (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.pinecone.Pinecone classmethod from_existing_index(index_name, embedding, text_key='text', namespace=None)[source] Load Pinecone vectorstore from an index name. Parameters index_name (str) – embedding (langchain.embeddings.base.Embeddings) – text_key (str) – namespace (Optional[str]) – Return type langchain.vectorstores.pinecone.Pinecone delete(ids, namespace=None)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. namespace (Optional[str]) – Return type None
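A minimal sketch of reconnecting to an index created earlier and then deleting vectors by id; the index name and ids are placeholders, and pinecone.init is assumed to have been called as in the examples above:

```python
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = Pinecone.from_existing_index(
    index_name="langchain-demo",  # placeholder index name
    embedding=embeddings,
    text_key="text",
)
vectorstore.delete(ids=["id-1", "id-2"])  # placeholder ids
```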
class langchain.vectorstores.Qdrant(client, collection_name, embeddings=None, content_payload_key='page_content', metadata_payload_key='metadata', embedding_function=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Qdrant vector database. To use, you should have the qdrant-client package installed. Example from qdrant_client import QdrantClient from langchain import Qdrant client = QdrantClient() collection_name = "MyCollection" qdrant = Qdrant(client, collection_name, embedding_function) Parameters client (Any) – collection_name (str) – embeddings (Optional[Embeddings]) – content_payload_key (str) – metadata_payload_key (str) – embedding_function (Optional[Callable]) – CONTENT_KEY = 'page_content' METADATA_KEY = 'metadata' add_texts(texts, metadatas=None, ids=None, batch_size=64, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[Sequence[str]]) – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. batch_size (int) – How many vectors to upload per request. Default: 64 kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results.
Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas kwargs (Any) – Returns List of Documents most similar to the query. Return type List[Document] similarity_search_with_score(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Tuple[Document, float]] similarity_search_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding vector to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas kwargs (Any) – Returns List of Documents most similar to the query. Return type List[Document] similarity_search_with_score_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding vector to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned.
consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Tuple[Document, float]] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document]
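To make the score_threshold semantics above concrete (with cosine similarity, higher scores are better, so the threshold acts as a floor), a thresholded query against the qdrant instance from the class example might look like this; the threshold value is illustrative:

```python
# Sketch only: 0.8 is an arbitrary cosine-similarity floor.
results = qdrant.similarity_search_with_score(
    "What did the president say about the economy?",
    k=4,
    score_threshold=0.8,  # drop matches scoring below 0.8
)
for doc, score in results:
    print(round(score, 3), doc.page_content[:80])
```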
classmethod from_texts(texts, embedding, metadatas=None, ids=None, location=None, url=None, port=6333, grpc_port=6334, prefer_grpc=False, https=None, api_key=None, prefix=None, timeout=None, host=None, path=None, collection_name=None, distance_func='Cosine', content_payload_key='page_content', metadata_payload_key='metadata', batch_size=64, shard_number=None, replication_factor=None, write_consistency_factor=None, on_disk_payload=None, hnsw_config=None, optimizers_config=None, wal_config=None, quantization_config=None, init_from=None, **kwargs)[source] Construct Qdrant wrapper from a list of texts. Parameters texts (List[str]) – A list of texts to be indexed in Qdrant. embedding (Embeddings) – A subclass of Embeddings, responsible for text vectorization. metadatas (Optional[List[dict]]) – An optional list of metadata. If provided it has to be of the same length as the list of texts. ids (Optional[Sequence[str]]) – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. location (Optional[str]) – If :memory: - use an in-memory Qdrant instance. If str - use it as a url parameter. If None - fall back to relying on the host and port parameters. url (Optional[str]) – either host or str of "Optional[scheme], host, Optional[port], Optional[prefix]". Default: None port (Optional[int]) – Port of the REST API interface. Default: 6333 grpc_port (int) – Port of the gRPC interface. Default: 6334 prefer_grpc (bool) – If true - use the gRPC interface whenever possible in custom methods. Default: False
https (Optional[bool]) – If true - use HTTPS(SSL) protocol. Default: None api_key (Optional[str]) – API key for authentication in Qdrant Cloud. Default: None prefix (Optional[str]) – If not None - add prefix to the REST URL path. Example: service/v1 will result in http://localhost:6333/service/v1/{qdrant-endpoint} for REST API. Default: None timeout (Optional[float]) – Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC host (Optional[str]) – Host name of the Qdrant service. If url and host are None, set to 'localhost'. Default: None path (Optional[str]) – Path in which the vectors will be stored while using local mode. Default: None collection_name (Optional[str]) – Name of the Qdrant collection to be used. If not provided, it will be created randomly. Default: None distance_func (str) – Distance function. One of: "Cosine" / "Euclid" / "Dot". Default: "Cosine" content_payload_key (str) – A payload key used to store the content of the document. Default: "page_content" metadata_payload_key (str) – A payload key used to store the metadata of the document. Default: "metadata" batch_size (int) – How many vectors to upload per request. Default: 64 shard_number (Optional[int]) – Number of shards in the collection. Default is 1, minimum is 1. replication_factor (Optional[int]) – Replication factor for the collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Takes effect only in distributed mode.
write_consistency_factor (Optional[int]) – Write consistency factor for the collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number will make the collection more resilient to inconsistencies, but will also make it fail if not enough replicas are available. Does not have any performance impact. Takes effect only in distributed mode. on_disk_payload (Optional[bool]) – If true - the point's payload will not be stored in memory. It will be read from disk every time it is requested. This setting saves RAM by (slightly) increasing the response time. Note: payload values that are involved in filtering and are indexed remain in RAM. hnsw_config (Optional[common_types.HnswConfigDiff]) – Params for HNSW index optimizers_config (Optional[common_types.OptimizersConfigDiff]) – Params for optimizer wal_config (Optional[common_types.WalConfigDiff]) – Params for Write-Ahead-Log quantization_config (Optional[common_types.QuantizationConfig]) – Params for quantization, if None - quantization will be disabled init_from (Optional[common_types.InitFrom]) – Use data stored in another collection to initialize this collection **kwargs – Additional arguments passed directly into REST client initialization kwargs (Any) – Return type Qdrant This is a user-friendly interface that: 1. Creates embeddings, one for each text 2. Initializes the Qdrant database as an in-memory docstore by default (and overridable to a remote docstore) 3. Adds the text embeddings to the Qdrant database This is intended to be a quick way to get started. Example
from langchain import Qdrant from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() qdrant = Qdrant.from_texts(texts, embeddings, host="localhost") class langchain.vectorstores.Redis(redis_url, index_name, embedding_function, content_key='content', metadata_key='metadata', vector_key='content_vector', relevance_score_fn=<function _default_relevance_score>, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Redis vector database. To use, you should have the redis python package installed. Example from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Redis( redis_url="redis://username:password@localhost:6379", index_name="my-index", embedding_function=embeddings.embed_query, ) Parameters redis_url (str) – index_name (str) – embedding_function (Callable) – content_key (str) – metadata_key (str) – vector_key (str) – relevance_score_fn (Optional[Callable[[float], float]]) – kwargs (Any) – add_texts(texts, metadatas=None, embeddings=None, batch_size=1000, **kwargs)[source] Add more texts to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. Defaults to None. embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None.
keys (List[str]) or ids (List[str]) – Identifiers of entries. Defaults to None. batch_size (int, optional) – Batch size to use for writes. Defaults to 1000. kwargs (Any) – Returns List of ids added to the vectorstore Return type List[str] similarity_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] similarity_search_limit_score(query, k=4, score_threshold=0.2, **kwargs)[source] Returns the most similar indexed documents to the query text within the score_threshold range. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. score_threshold (float) – The minimum matching score required for a document to be considered a match. Defaults to 0.2. Because the similarity calculation algorithm is based on cosine similarity, the smaller the angle, the higher the similarity. kwargs (Any) – Returns A list of documents that are most similar to the query text, including the match score for each document. Return type List[Document]
Note If there are no documents that satisfy the score_threshold value, an empty list is returned. similarity_search_with_score(query, k=4)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts_return_keys(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', distance_metric='COSINE', **kwargs)[source] Create a Redis vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in Redis. Adds the documents to the newly created Redis index. Returns the keys of the newly created documents. This is intended to be a quick way to get started. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (Optional[str]) – content_key (str) – metadata_key (str) – vector_key (str) – distance_metric (Literal['COSINE', 'IP', 'L2']) – kwargs (Any) – Return type Tuple[langchain.vectorstores.redis.Redis, List[str]]
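Since the docstring ships no example, a minimal sketch of from_texts_return_keys follows; the Redis URL and texts are placeholders, and passing the connection URL through kwargs as redis_url is an assumption based on the constructor parameters above:

```python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
rds, keys = Redis.from_texts_return_keys(
    texts=["foo", "bar", "baz"],
    embedding=embeddings,
    index_name="my-index",
    redis_url="redis://localhost:6379",  # assumed kwarg, per the constructor
)
print(keys)  # Redis keys of the newly created documents
```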
classmethod from_texts(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source] Create a Redis vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in Redis. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (Optional[str]) – content_key (str) – metadata_key (str) – vector_key (str) – kwargs (Any) – Return type langchain.vectorstores.redis.Redis static delete(ids, **kwargs)[source] Delete a Redis entry. Parameters ids (List[str]) – List of ids (keys) to delete. kwargs (Any) – Returns Whether or not the deletions were successful. Return type bool static drop_index(index_name, delete_documents, **kwargs)[source] Drop a Redis search index. Parameters index_name (str) – Name of the index to drop. delete_documents (bool) – Whether to drop the associated documents. kwargs (Any) – Returns Whether or not the drop was successful. Return type bool classmethod from_existing_index(embedding, index_name, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source] Connect to an existing Redis index. Parameters
embedding (langchain.embeddings.base.Embeddings) – index_name (str) – content_key (str) – metadata_key (str) – vector_key (str) – kwargs (Any) – Return type langchain.vectorstores.redis.Redis as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.redis.RedisVectorStoreRetriever class langchain.vectorstores.Rockset(client, embeddings, collection_name, text_key, embedding_key)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Rockset vector database. To use, you should have the rockset python package installed. Note that to use this, the collection being used must already exist in your Rockset instance. You must also ensure you use a Rockset ingest transformation to apply VECTOR_ENFORCE on the column being used to store embedding_key in the collection. See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details Everything below assumes the commons Rockset workspace. TODO: Add support for workspace args. Example from langchain.vectorstores import Rockset from langchain.embeddings.openai import OpenAIEmbeddings import rockset # Make sure you use the right host (region) for your Rockset instance # and your API key has read-write access to your collection. rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***") collection_name = "langchain_demo" embeddings = OpenAIEmbeddings() vectorstore = Rockset(rs, collection_name, embeddings, "description", "description_embedding") Parameters client (Any) – embeddings (Embeddings) –
collection_name (str) – text_key (str) – embedding_key (str) – add_texts(texts, metadatas=None, ids=None, batch_size=32, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Send documents in batches to Rockset. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, client=None, collection_name='', text_key='', embedding_key='', ids=None, batch_size=32, **kwargs)[source] Create Rockset wrapper with existing texts. This is intended as a quicker way to get started. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – client (Any) – collection_name (str) – text_key (str) – embedding_key (str) – ids (Optional[List[str]]) – batch_size (int) – kwargs (Any) – Return type langchain.vectorstores.rocksetdb.Rockset
class DistanceFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source] Bases: enum.Enum COSINE_SIM = 'COSINE_SIM' EUCLIDEAN_DIST = 'EUCLIDEAN_DIST' DOT_PRODUCT = 'DOT_PRODUCT' order_by()[source] Return type str similarity_search_with_relevance_scores(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Perform a similarity search with Rockset Parameters query (str) – Text to look up documents similar to. distance_func (DistanceFunction) – how to compute distance between two vectors in Rockset. k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – Metadata filters supplied as a SQL where condition string. Defaults to None. e.g. "price<=70.0 AND brand='Nintendo'" NOTE – Please do not let end users fill this, and always be aware of SQL injection. kwargs (Any) – Returns List of documents with their relevance score Return type List[Tuple[Document, float]] similarity_search(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Same as similarity_search_with_relevance_scores but doesn't return the scores. Parameters query (str) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Document]
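Building on the vectorstore from the Rockset class example, a scored search with a metadata filter might look like this; the filter string is illustrative, and per the warning above it must never come from end users (SQL injection):

```python
# Sketch using the documented nested enum for the distance function.
results = vectorstore.similarity_search_with_relevance_scores(
    "mario games",
    k=4,
    distance_func=Rockset.DistanceFunction.COSINE_SIM,
    where_str="price <= 70.0 AND brand = 'Nintendo'",  # trusted input only
)
for doc, score in results:
    print(round(score, 3), doc.page_content[:60])
```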
similarity_search_by_vector(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Accepts a query_embedding (vector), and returns documents with similar embeddings. Parameters embedding (List[float]) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Document] similarity_search_by_vector_with_relevance_scores(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Accepts a query_embedding (vector), and returns documents with similar embeddings along with their relevance scores. Parameters embedding (List[float]) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Tuple[Document, float]] delete_texts(ids)[source] Delete a list of docs from the Rockset collection Parameters ids (List[str]) – Return type None class langchain.vectorstores.SKLearnVectorStore(embedding, *, persist_path=None, serializer='json', metric='cosine', **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore A simple in-memory vector store based on the scikit-learn NearestNeighbors implementation. Parameters embedding (langchain.embeddings.base.Embeddings) – persist_path (Optional[str]) – serializer (Literal['json', 'bson', 'parquet']) – metric (str) – kwargs (Any) – Return type None
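The class docstring carries no example, so here is a minimal quick-start sketch; the texts and persist_path are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

embeddings = OpenAIEmbeddings()
store = SKLearnVectorStore.from_texts(
    texts=["foo", "bar", "baz"],
    embedding=embeddings,
    persist_path="/tmp/sklearn_store.json",  # optional; pairs with serializer='json'
)
docs = store.similarity_search("foo", k=2)
store.persist()  # writes the store to persist_path
```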
persist()[source] Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, *, k=4, **kwargs)[source] Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding – Embedding to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.
Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. Parameters embedding (List[float]) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query – Text to look up documents similar to. k – Number of Documents to return. Defaults to 4. fetch_k – Number of Documents to fetch to pass to MMR algorithm. lambda_mult – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. Parameters query (str) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, ids=None, persist_path=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) –
persist_path (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.sklearn.SKLearnVectorStore class langchain.vectorstores.StarRocks(embedding, config=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around StarRocks vector database. You need the pymysql python package and a valid account to connect to StarRocks. Right now StarRocks has only implemented the cosine_similarity function to compute the distance between two vectors, and there is no vector index yet, so we have to iterate over all vectors and compute the spatial distance. For more information, please visit the [StarRocks official site](https://www.starrocks.io/) and [StarRocks github](https://github.com/StarRocks/starrocks). Parameters embedding (Embeddings) – config (Optional[StarRocksSettings]) – kwargs (Any) – Return type None escape_str(value)[source] Parameters value (str) – Return type str add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source] Insert more texts through the embeddings and add to the VectorStore. Parameters texts (Iterable[str]) – Iterable of strings to add to the VectorStore. ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Batch size of insertion metadata – Optional column data to be inserted metadatas (Optional[List[dict]]) – kwargs (Any) – Returns List of ids from adding the texts into the VectorStore. Return type List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source] Create StarRocks wrapper with existing texts. Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (StarRocksSettings, Optional) – StarRocks configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batch size when transmitting data to StarRocks. Defaults to 32. metadata (List[dict], optional) – metadata for the texts. Defaults to None. embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[Dict[Any, Any]]]) – kwargs (Any) – Returns StarRocks Index Return type langchain.vectorstores.starrocks.StarRocks similarity_search(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document]
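As an illustration of the where_str mechanics described above, including the metadata-column prefix, a filtered search might look like this; connection settings are assumed to come from a default StarRocksSettings, and the filter is illustrative trusted input:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import StarRocks

embeddings = OpenAIEmbeddings()
# Assumes default StarRocksSettings supply the connection details.
vectorstore = StarRocks.from_texts(texts=["foo", "bar"], embedding=embeddings)
docs = vectorstore.similarity_search(
    "foo",
    k=4,
    where_str="metadata.source = 'docs'",  # note the metadata column prefix
)
```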
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks by vector Parameters embedding (List[float]) – query vector k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document] similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of documents Return type List[Document] drop()[source] Helper function: Drop data Return type None property metadata_column: str class langchain.vectorstores.SupabaseVectorStore(client, embedding, table_name, query_name=None)[source] Bases: langchain.vectorstores.base.VectorStore
VectorStore for a Supabase postgres database. Assumes you have the pgvector extension installed and a match_documents (or similar) function. For more details: https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase You can implement your own match_documents function in order to limit the search space to a subset of documents based on your own authorization or business logic. Note that the Supabase Python client does not yet support async operations. If you'd like to use max_marginal_relevance_search, please review the instructions below on modifying the match_documents function to return matched embeddings. Parameters client (supabase.client.Client) – embedding (Embeddings) – table_name (str) – query_name (Union[str, None]) – Return type None table_name: str query_name: str add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict[Any, Any]]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, client=None, table_name='documents', query_name='match_documents', ids=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (Embeddings) –
metadatas (Optional[List[dict]]) – client (Optional[supabase.client.Client]) – table_name (Optional[str]) – query_name (Union[str, None]) – ids (Optional[List[str]]) – kwargs (Any) – Return type SupabaseVectorStore add_vectors(vectors, documents, ids)[source] Parameters vectors (List[List[float]]) – documents (List[langchain.schema.Document]) – ids (List[str]) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] similarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query (str) – input text k (int) – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs kwargs (Any) – Returns List of Tuples of (doc, similarity_score) Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector_with_relevance_scores(query, k)[source] Parameters query (List[float]) – k (int) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector_returning_embeddings(query, k)[source] Parameters query (List[float]) – k (int) – Return type List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search requires that query_name returns matched embeddings alongside the match documents. The following function demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(
    id bigint,
    content text,
    metadata jsonb,
    embedding vector(1536),
    similarity float)
LANGUAGE plpgsql
AS $$
#variable_conflict use_column
BEGIN
    RETURN query
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM docstore
    ORDER BY docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None
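Putting the pieces above together, an end-to-end SupabaseVectorStore sketch might look like the following; the project URL, key, and table/function names are placeholders, and the match_documents function must already exist in the database (see the SQL above):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase import create_client

# Placeholder project URL and key.
supabase = create_client("https://<project>.supabase.co", "<service-role-key>")
embeddings = OpenAIEmbeddings()
vectorstore = SupabaseVectorStore.from_texts(
    texts=["foo", "bar"],
    embedding=embeddings,
    client=supabase,
    table_name="documents",
    query_name="match_documents",
)
docs = vectorstore.similarity_search("foo", k=4)
```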
class langchain.vectorstores.Tair(embedding_function, url, index_name, content_key='content', metadata_key='metadata', search_params=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Tair vector store. Parameters embedding_function (Embeddings) – url (str) – index_name (str) – content_key (str) – metadata_key (str) – search_params (Optional[dict]) – kwargs (Any) – create_index_if_not_exist(dim, distance_type, index_type, data_type, **kwargs)[source] Parameters dim (int) – distance_type (str) – index_type (str) – data_type (str) – kwargs (Any) – Return type bool add_texts(texts, metadatas=None, **kwargs)[source] Add texts data to an existing index. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document]
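The class docstring has no example, so a minimal sketch follows; the connection URL is a placeholder, and passing it to from_texts as a tair_url keyword is an assumption about the **kwargs it accepts:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tair

embeddings = OpenAIEmbeddings()
vectorstore = Tair.from_texts(
    texts=["foo", "bar"],
    embedding=embeddings,
    index_name="langchain",
    tair_url="redis://user:password@localhost:6379",  # assumed kwarg name
)
docs = vectorstore.similarity_search("foo", k=4)
```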
classmethod from_texts(texts, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (str) – content_key (str) – metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair classmethod from_documents(documents, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (str) – content_key (str) – metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair static drop_index(index_name='langchain', **kwargs)[source] Drop an existing index. Parameters index_name (str) – Name of the index to drop. kwargs (Any) – Returns True if the index is dropped successfully. Return type bool classmethod from_existing_index(embedding, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Connect to an existing Tair index. Parameters embedding (langchain.embeddings.base.Embeddings) – index_name (str) – content_key (str) –
metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair class langchain.vectorstores.Tigris(client, embeddings, index_name)[source] Bases: langchain.vectorstores.base.VectorStore Parameters client (TigrisClient) – embeddings (Embeddings) – index_name (str) – property search_index: TigrisVectorStore add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids for documents. Ids will be autogenerated if not provided. kwargs (Any) – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – filter (Optional[TigrisFilter]) – kwargs (Any) – Return type List[Document]
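Tigris likewise ships no docstring example; a minimal sketch follows, assuming Tigris project credentials are configured in the environment so that a default client can be created:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tigris

embeddings = OpenAIEmbeddings()
# With client=None, a default TigrisClient is assumed to be constructed
# from environment configuration.
vectorstore = Tigris.from_texts(
    texts=["foo", "bar"],
    embedding=embeddings,
    index_name="my_index",  # placeholder index name
)
docs = vectorstore.similarity_search("foo", k=4)
```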
similarity_search_with_score(query, k=4, filter=None)[source] Run similarity search with Tigris with distance. Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[TigrisFilter]) – Filter by metadata. Defaults to None. Returns List of documents most similar to the query text with distance in float. Return type List[Tuple[Document, float]] classmethod from_texts(texts, embedding, metadatas=None, ids=None, client=None, index_name=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – client (Optional[TigrisClient]) – index_name (Optional[str]) – kwargs (Any) – Return type Tigris class langchain.vectorstores.Typesense(typesense_client, embedding, *, typesense_collection_name=None, text_key='text')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Typesense vector search. To use, you should have the typesense python package installed. Example from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Typesense import typesense node = { "host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net "port": "8108", # For Typesense Cloud use 443 "protocol": "http" # For Typesense Cloud use https } typesense_client = typesense.Client( { "nodes": [node], "api_key": "<API_KEY>", "connection_timeout_seconds": 2 } ) typesense_collection_name = "langchain-memory" embedding = OpenAIEmbeddings() vectorstore = Typesense( typesense_client=typesense_client, embedding=embedding,
typesense_collection_name=typesense_collection_name, text_key="text", ) Parameters typesense_client (Client) – embedding (Embeddings) – typesense_collection_name (Optional[str]) – text_key (str) – add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embedding and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=10, filter='')[source] Return typesense documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 10. A minimum of 10 results is always returned. filter (Optional[str]) – typesense filter_by expression to filter documents on Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=10, filter='', **kwargs)[source] Return typesense documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 10. A minimum of 10 results is always returned.
filter (Optional[str]) – typesense filter_by expression to filter documents on kwargs (Any) – Returns List of Documents most similar to the query Return type List[langchain.schema.Document] classmethod from_client_params(embedding, *, host='localhost', port='8108', protocol='http', typesense_api_key=None, connection_timeout_seconds=2, **kwargs)[source] Initialize Typesense directly from client parameters. Example from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Typesense # Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY". vectorstore = Typesense.from_client_params( OpenAIEmbeddings(), host="localhost", port="8108", protocol="http", typesense_collection_name="langchain-memory", ) Parameters embedding (langchain.embeddings.base.Embeddings) – host (str) – port (Union[str, int]) – protocol (str) – typesense_api_key (Optional[str]) – connection_timeout_seconds (int) – kwargs (Any) – Return type langchain.vectorstores.typesense.Typesense classmethod from_texts(texts, embedding, metadatas=None, ids=None, typesense_client=None, typesense_client_params=None, typesense_collection_name=None, text_key='text', **kwargs)[source] Construct Typesense wrapper from raw text. Parameters texts (List[str]) – embedding (Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – typesense_client (Optional[Client]) – typesense_client_params (Optional[dict]) –
typesense_collection_name (Optional[str]) – text_key (str) – kwargs (Any) – Return type Typesense class langchain.vectorstores.Vectara(vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None)[source] Bases: langchain.vectorstores.base.VectorStore Implementation of Vector Store using Vectara (https://vectara.com). Example from langchain.vectorstores import Vectara vectorstore = Vectara( vectara_customer_id=vectara_customer_id, vectara_corpus_id=vectara_corpus_id, vectara_api_key=vectara_api_key ) Parameters vectara_customer_id (Optional[str]) – vectara_corpus_id (Optional[str]) – vectara_api_key (Optional[str]) – add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source] Return Vectara documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 5. lambda_val (float) – lexical match parameter for hybrid search.
filter (Optional[str]) – filter expression to apply on metadata. For example, a filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details. n_sentence_context (int) – number of sentences before/after the matching segment to add kwargs (Any) – Returns List of Documents most similar to the query and score for each. Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source] Return Vectara documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 5. filter (Optional[str]) – filter expression to apply on metadata. For example, a filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details. n_sentence_context (int) – number of sentences before/after the matching segment to add lambda_val (float) – kwargs (Any) – Returns List of Documents most similar to the query Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding=None, metadatas=None, **kwargs)[source] Construct Vectara wrapper from raw documents. This is intended to be a quick way to get started. Example
from langchain import Vectara
vectara = Vectara.from_texts(
    texts,
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key,
)
Parameters texts (List[str]) – embedding (Optional[langchain.embeddings.base.Embeddings]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.vectara.Vectara as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.vectara.VectaraRetriever class langchain.vectorstores.VectorStore[source] Bases: abc.ABC Interface for vector stores. abstract add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. Return type List[str] delete(ids)[source] Delete by vector ID. Parameters ids (List[str]) – List of ids to delete. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] async aadd_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) –
kwargs (Any) – Return type List[str] add_documents(documents, **kwargs)[source] Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[langchain.schema.Document]) – Documents to add to the vectorstore. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] async aadd_documents(documents, **kwargs)[source] Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[langchain.schema.Document]) – Documents to add to the vectorstore. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] search(query, search_type, **kwargs)[source] Return docs most similar to query using specified search type. Parameters query (str) – search_type (str) – kwargs (Any) – Return type List[langchain.schema.Document] async asearch(query, search_type, **kwargs)[source] Return docs most similar to query using specified search type. Parameters query (str) – search_type (str) – kwargs (Any) – Return type List[langchain.schema.Document] abstract similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document]
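Because VectorStore is an abstract base class, a concrete store only has to supply add_texts, similarity_search, and from_texts. The following minimal in-memory sketch is not part of the library (the TinyVectorStore name and the brute-force dot-product ranking are illustrative assumptions); it only shows the shape of the contract:

import uuid
from typing import Any, Iterable, List, Optional

from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores.base import VectorStore

class TinyVectorStore(VectorStore):
    """Hypothetical in-memory store, for illustration only."""

    def __init__(self, embedding: Embeddings):
        self._embedding = embedding
        self._vectors: List[List[float]] = []
        self._docs: List[Document] = []

    def add_texts(self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) -> List[str]:
        texts = list(texts)
        metadatas = metadatas or [{} for _ in texts]
        self._vectors.extend(self._embedding.embed_documents(texts))
        self._docs.extend(Document(page_content=t, metadata=m) for t, m in zip(texts, metadatas))
        return [str(uuid.uuid4()) for _ in texts]

    def similarity_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
        q = self._embedding.embed_query(query)
        # Brute-force ranking by dot product; a real store would use an index.
        scored = sorted(zip(self._docs, self._vectors),
                        key=lambda dv: -sum(a * b for a, b in zip(q, dv[1])))
        return [doc for doc, _ in scored[:k]]

    @classmethod
    def from_texts(cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) -> "TinyVectorStore":
        store = cls(embedding)
        store.add_texts(texts, metadatas)
        return store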
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query (str) – input text k (int) – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs kwargs (Any) – Returns List of Tuples of (doc, similarity_score) Return type List[Tuple[langchain.schema.Document, float]] async asimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] async asimilarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document]
async asimilarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] async amax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Parameters query (str) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] async amax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Parameters embedding (List[float]) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] classmethod from_documents(documents, embedding, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.vectorstores.base.VST async classmethod afrom_documents(documents, embedding, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type
langchain.vectorstores.base.VST abstract classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.base.VST async classmethod afrom_texts(texts, embedding, metadatas=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.base.VST as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.base.VectorStoreRetriever class langchain.vectorstores.Weaviate(client, index_name, text_key, embedding=None, attributes=None, relevance_score_fn=<function _default_score_normalizer>, by_text=True)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Weaviate vector database. To use, you should have the weaviate-client python package installed. Example
import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
Parameters client (Any) – index_name (str) – text_key (str) – embedding (Optional[Embeddings]) – attributes (Optional[List[str]]) –
relevance_score_fn (Optional[Callable[[float], float]]) – by_text (bool) – add_texts(texts, metadatas=None, **kwargs)[source] Upload texts with metadata (properties) to Weaviate. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_text(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Look up similar documents by embedding vector in Weaviate. Parameters embedding (List[float]) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, **kwargs)[source] Return list of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Parameters
query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Construct Weaviate wrapper from raw texts. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in the Weaviate instance. Adds the documents to the newly created Weaviate index. This is intended to be a quick way to get started. Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
    texts,
    embeddings,
    weaviate_url="http://localhost:8080"
)
Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.weaviate.Weaviate delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None
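Querying an existing index through the wrapper then looks roughly like the sketch below (the index name "LangChain" and a locally running Weaviate instance are assumptions; supplying an embedding with by_text=False is what enables the vector-based and MMR searches):

import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

client = weaviate.Client(url="http://localhost:8080")
db = Weaviate(client, index_name="LangChain", text_key="text",
              embedding=OpenAIEmbeddings(), by_text=False)

# Plain scored search, then a diversity-aware MMR search.
docs_and_scores = db.similarity_search_with_score("what did the president say?", k=4)
diverse_docs = db.max_marginal_relevance_search("what did the president say?", k=4, fetch_k=20)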
Agent Toolkits
langchain.agents.agent_toolkits.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the
answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a JSON agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
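A typical invocation looks like the hedged sketch below (the spec file name is a placeholder; JsonSpec and JsonToolkit are documented later on this page):

import yaml
from langchain.agents.agent_toolkits import JsonToolkit, create_json_agent
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec

with open("openai_openapi.yml") as f:
    data = yaml.safe_load(f)

json_spec = JsonSpec(dict_=data, max_value_length=4000)
toolkit = JsonToolkit(spec=json_spec)
agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
agent.run("What are the required parameters in the request body to the /completions endpoint?")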
langchain.agents.agent_toolkits.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False,
agent_executor_kwargs=None, **kwargs)[source]
Construct a SQL agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (Optional[str]) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
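Usage follows the same pattern (a hedged sketch; the SQLite file is a placeholder):

from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many employees are there?")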
langchain.agents.agent_toolkits.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]
Construct an OpenAPI agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – return_intermediate_steps (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
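Wiring this up might look as follows (a hedged sketch; the spec file name is a placeholder, and OpenAPIToolkit and TextRequestsWrapper are documented later on this page):

import yaml
from langchain.agents.agent_toolkits import OpenAPIToolkit, create_openapi_agent
from langchain.llms import OpenAI
from langchain.requests import TextRequestsWrapper
from langchain.tools.json.tool import JsonSpec

with open("openapi.yml") as f:
    raw_spec = yaml.safe_load(f)

json_spec = JsonSpec(dict_=raw_spec, max_value_length=4000)
llm = OpenAI(temperature=0)
toolkit = OpenAPIToolkit.from_llm(llm, json_spec, TextRequestsWrapper())
agent = create_openapi_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("What is the base URL of this API, and which paths does it expose?")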
langchain.agents.agent_toolkits.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None,
input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Power BI agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
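A hedged sketch of wiring this up (the PowerBIDataset field names dataset_id, table_names, and credential are assumptions about the utility class, the dataset id and table names are placeholders, and authentication normally goes through azure.identity):

from azure.identity import DefaultAzureCredential
from langchain.agents.agent_toolkits import PowerBIToolkit, create_pbi_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.powerbi import PowerBIDataset

powerbi = PowerBIDataset(
    dataset_id="<dataset_id>",           # placeholder
    table_names=["Sales", "Customers"],  # placeholder tables
    credential=DefaultAzureCredential(),
)
llm = ChatOpenAI(temperature=0)
toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm)
agent = create_pbi_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many rows are in the Sales table?")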
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a Power BI chat agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both the agent and the toolkit. Parameters llm (langchain.chat_models.base.BaseChatModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_python_agent(llm, tool, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, verbose=False, prefix='You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs=None, **kwargs)[source]
Construct a Python agent from an LLM and tool. Parameters llm (langchain.base_language.BaseLanguageModel) – tool (langchain.tools.python.tool.PythonREPLTool) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – prefix (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
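For example, the Python agent pairs an LLM with the PythonREPLTool (a minimal hedged sketch):

from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(llm=OpenAI(temperature=0), tool=PythonREPLTool(), verbose=True)
agent.run("What is the 10th Fibonacci number?")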
class langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with a JSON spec. Parameters spec (langchain.tools.json.tool.JsonSpec) – Return type None attribute spec: langchain.tools.json.tool.JsonSpec [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.SQLDatabaseToolkit(*, db, llm)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with SQL databases. Parameters db (langchain.sql_database.SQLDatabase) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute db: langchain.sql_database.SQLDatabase [Required] attribute llm: langchain.base_language.BaseLanguageModel [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] property dialect: str Return string representation of dialect to use. class langchain.agents.agent_toolkits.SparkSQLToolkit(*, db, llm)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with Spark SQL. Parameters db (langchain.utilities.spark_sql.SparkSQL) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute db: langchain.utilities.spark_sql.SparkSQL [Required] attribute llm: langchain.base_language.BaseLanguageModel [Required] get_tools()[source] Get the tools in the toolkit. Return type
List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.NLAToolkit(*, nla_tools)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Natural Language API Toolkit Definition. Parameters nla_tools (Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool]) – Return type None attribute nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required] List of API Endpoint Tools. classmethod from_llm_and_ai_plugin(llm, ai_plugin, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an AI plugin. Parameters llm (langchain.base_language.BaseLanguageModel) – ai_plugin (langchain.tools.plugin.AIPlugin) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_ai_plugin_url(llm, ai_plugin_url, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an AI plugin URL. Parameters llm (langchain.base_language.BaseLanguageModel) – ai_plugin_url (str) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_spec(llm, spec, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit by creating tools for each operation. Parameters
llm (langchain.base_language.BaseLanguageModel) – spec (langchain.utilities.openapi.OpenAPISpec) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_url(llm, open_api_url, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an OpenAPI spec URL. Parameters llm (langchain.base_language.BaseLanguageModel) – open_api_url (str) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit get_tools()[source] Get the tools for all the API operations. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.PowerBIToolkit(*, powerbi, llm, examples=None, max_iterations=5, callback_manager=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with a Power BI dataset. Parameters powerbi (langchain.utilities.powerbi.PowerBIDataset) – llm (langchain.base_language.BaseLanguageModel) – examples (Optional[str]) – max_iterations (int) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – Return type None attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None attribute examples: Optional[str] = None
attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute max_iterations: int = 5 attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.OpenAPIToolkit(*, json_agent, requests_wrapper)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with an OpenAPI API. Parameters json_agent (langchain.agents.agent.AgentExecutor) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None attribute json_agent: langchain.agents.agent.AgentExecutor [Required] attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required] classmethod from_llm(llm, json_spec, requests_wrapper, **kwargs)[source] Create a JSON agent from the LLM, then initialize the toolkit. Parameters llm (langchain.base_language.BaseLanguageModel) – json_spec (langchain.tools.json.tool.JsonSpec) – requests_wrapper (langchain.requests.TextRequestsWrapper) – kwargs (Any) – Return type langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.VectorStoreToolkit(*, vectorstore_info, llm=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with a vector store. Parameters
vectorstore_info (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Optional] attribute vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore router agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor class langchain.agents.agent_toolkits.VectorStoreInfo(*, vectorstore, name, description)[source] Bases: pydantic.main.BaseModel Information about a vectorstore. Parameters
vectorstore (langchain.vectorstores.base.VectorStore) – name (str) – description (str) – Return type None attribute description: str [Required] attribute name: str [Required] attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] class langchain.agents.agent_toolkits.VectorStoreRouterToolkit(*, vectorstores, llm=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for routing between vector stores. Parameters vectorstores (List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo]) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Optional] attribute vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source] Construct a pandas agent from an LLM and dataframe. Parameters llm (langchain.base_language.BaseLanguageModel) – df (Any) – agent_type (langchain.agents.agent_types.AgentType) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (Optional[str]) – suffix (Optional[str]) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – include_df_in_prompt (Optional[bool]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source] Construct a Spark agent from an LLM and dataframe. Parameters llm (langchain.llms.base.BaseLLM) – df (Any) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) –
early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
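A hedged sketch of the pandas variant just described (the CSV file is a placeholder):

import pandas as pd
from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are there?")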
langchain.agents.agent_toolkits.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10,
max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Spark SQL agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source] Create a CSV agent by loading the file into a dataframe and using the pandas agent. Parameters llm (langchain.base_language.BaseLanguageModel) – path (Union[str, List[str]]) – pandas_kwargs (Optional[dict]) – kwargs (Any) – Return type langchain.agents.agent.AgentExecutor class langchain.agents.agent_toolkits.ZapierToolkit(*, tools=[])[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Zapier Toolkit. Parameters tools (List[langchain.tools.base.BaseTool]) – Return type None attribute tools: List[langchain.tools.base.BaseTool] = [] async classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper)[source] Create a toolkit from a ZapierNLAWrapper. Parameters
zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – Return type langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit classmethod from_zapier_nla_wrapper(zapier_nla_wrapper)[source] Create a toolkit from a ZapierNLAWrapper. Parameters zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – Return type langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.GmailToolkit(*, api_resource=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with Gmail. Parameters api_resource (Resource) – Return type None attribute api_resource: Resource [Optional] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.JiraToolkit(*, tools=[])[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Jira Toolkit. Parameters tools (List[langchain.tools.base.BaseTool]) – Return type None attribute tools: List[langchain.tools.base.BaseTool] = [] classmethod from_jira_api_wrapper(jira_api_wrapper)[source] Parameters jira_api_wrapper (langchain.utilities.jira.JiraAPIWrapper) – Return type langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit get_tools()[source]
Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.FileManagementToolkit(*, root_dir=None, selected_tools=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with local files. Parameters root_dir (Optional[str]) – selected_tools (Optional[List[str]]) – Return type None attribute root_dir: Optional[str] = None If specified, all file operations are made relative to root_dir. attribute selected_tools: Optional[List[str]] = None If provided, only the selected tools are exposed. Defaults to all. get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.PlayWrightBrowserToolkit(*, sync_browser=None, async_browser=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for web browser tools. Parameters sync_browser (Optional['SyncBrowser']) – async_browser (Optional['AsyncBrowser']) – Return type None attribute async_browser: Optional['AsyncBrowser'] = None attribute sync_browser: Optional['SyncBrowser'] = None classmethod from_browser(sync_browser=None, async_browser=None)[source] Instantiate the toolkit. Parameters sync_browser (Optional[SyncBrowser]) – async_browser (Optional[AsyncBrowser]) – Return type PlayWrightBrowserToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool]
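Instantiating the browser toolkit might look like this (a hedged sketch; the create_sync_playwright_browser helper lives in langchain.tools.playwright.utils in recent releases, but treat the import path as an assumption):

from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_sync_playwright_browser

sync_browser = create_sync_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)
tools = toolkit.get_tools()  # navigation, click, and text-extraction tools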
class langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for Azure Cognitive Services. Return type None get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool]
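All of these toolkits share one pattern: call get_tools() and hand the result to an agent constructor. A hedged sketch using FileManagementToolkit (the selected tool names are assumptions about what the toolkit exposes, and the root directory is a placeholder):

from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.llms import OpenAI

toolkit = FileManagementToolkit(
    root_dir="/tmp/scratch",  # all file operations stay inside this directory
    selected_tools=["read_file", "write_file", "list_directory"],
)
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Write 'hello' to notes.txt, then read it back.")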
Output Parsers class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source] Bases: langchain.schema.BaseOutputParser[bool] Parameters true_val (str) – false_val (str) – Return type None attribute false_val: str = 'NO' attribute true_val: str = 'YES' parse(text)[source] Parse the output of an LLM call to a boolean. Parameters text (str) – output of language model Returns boolean Return type bool class langchain.output_parsers.CombiningOutputParser(*, parsers)[source] Bases: langchain.schema.BaseOutputParser Class to combine multiple output parsers into one. Parameters parsers (List[langchain.schema.BaseOutputParser]) – Return type None attribute parsers: List[langchain.schema.BaseOutputParser] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, Any] class langchain.output_parsers.CommaSeparatedListOutputParser[source] Bases: langchain.output_parsers.list.ListOutputParser Parse out comma separated lists. Return type None get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str]
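For instance, CommaSeparatedListOutputParser round-trips as follows (a minimal sketch):

from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
instructions = parser.get_format_instructions()  # appended to the prompt so the LLM answers as a comma-separated list
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']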
class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source] Bases: langchain.schema.BaseOutputParser[datetime.datetime] Parameters format (str) – Return type None attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ' get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters response (str) – output of language model Returns structured output Return type datetime.datetime class langchain.output_parsers.EnumOutputParser(*, enum)[source] Bases: langchain.schema.BaseOutputParser Parameters enum (Type[enum.Enum]) – Return type None attribute enum: Type[enum.Enum] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters response (str) – output of language model Returns structured output Return type Any class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source] Bases: langchain.schema.BaseOutputParser Parameters guard (Any) –
api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type None attribute api: Optional[Callable] = None attribute args: Any = None attribute guard: Any = None attribute kwargs: Any = None classmethod from_pydantic(output_class, num_reasks=1, api=None, *args, **kwargs)[source] Parameters output_class (Any) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_file (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_str (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type Dict class langchain.output_parsers.ListOutputParser[source] Bases: langchain.schema.BaseOutputParser Class to parse the output of an LLM call to a list. Return type None abstract parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str] class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] Wraps a parser and tries to fix parsing errors. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required]
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters completion (str) – output of language model Returns structured output Return type langchain.output_parsers.fix.T class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T] Parameters pydantic_object (Type[langchain.output_parsers.pydantic.T]) – Return type None
class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source]
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T]
Parameters: pydantic_object (Type[langchain.output_parsers.pydantic.T])
Return type: None

attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]

get_format_instructions()[source]
Instructions on how the LLM output should be formatted.
Return type: str

parse(text)[source]
Parse the output of an LLM call. A method that takes in a string (assumed to be the output of a language model) and parses it into some structure.
Parameters: text (str) – output of language model
Returns: structured output
Return type: langchain.output_parsers.pydantic.T
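Example (illustrative). The usual pattern: embed get_format_instructions() in the prompt so the model knows the JSON shape to emit, then parse() validates the reply into the pydantic object. The Joke schema is an assumption made for illustration.

from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Given a reply that follows the instructions, parse() returns a Joke.
reply = '{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}'
joke = parser.parse(reply)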
class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source]
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters: regex_pattern (str), output_key_to_format (Dict[str, str]), no_update_value (Optional[str])
Return type: None

attribute no_update_value: Optional[str] = None
attribute output_key_to_format: Dict[str, str] [Required]
attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"

parse(text)[source]
Parse the output of an LLM call.
Parameters: text (str)
Return type: Dict[str, str]

class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source]
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters: regex (str), output_keys (List[str]), default_output_key (Optional[str])
Return type: None

attribute default_output_key: Optional[str] = None
attribute output_keys: List[str] [Required]
attribute regex: str [Required]

parse(text)[source]
Parse the output of an LLM call.
Parameters: text (str)
Return type: Dict[str, str]
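Example (illustrative). RegexParser maps each regex capture group onto the corresponding entry of output_keys; the pattern and key names below are assumptions made for illustration.

from langchain.output_parsers import RegexParser

parser = RegexParser(
    regex=r"Score: (\d+)\nReason: (.*)",
    output_keys=["score", "reason"],
)

result = parser.parse("Score: 8\nReason: Clear and well argued.")
# result == {"score": "8", "reason": "Clear and well argued."}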
class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source]
Bases: pydantic.main.BaseModel
Parameters: name (str), description (str), type (str)
Return type: None

attribute description: str [Required]
attribute name: str [Required]
attribute type: str = 'string'

class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source]
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt and the completion to another LLM, and telling it the completion did not satisfy criteria in the prompt.
Parameters: parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]), retry_chain (langchain.chains.llm.LLMChain)
Return type: None

attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]
attribute retry_chain: langchain.chains.llm.LLMChain [Required]

classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source]
Parameters: llm (langchain.base_language.BaseLanguageModel), parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]), prompt (langchain.prompts.base.BasePromptTemplate)
Return type: langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T]

get_format_instructions()[source]
Instructions on how the LLM output should be formatted.
Return type: str

parse(completion)[source]
Parse the output of an LLM call. A method that takes in a string (assumed to be the output of a language model) and parses it into some structure.
Parameters: completion (str) – output of language model
Returns: structured output
Return type: langchain.output_parsers.retry.T

parse_with_prompt(completion, prompt_value)[source]
Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters:
completion (str) – output of language model
prompt_value (langchain.schema.PromptValue) – prompt value
Returns: structured output
Return type: langchain.output_parsers.retry.T
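Example (illustrative). Because the retry prompt needs the original prompt as context, parse_with_prompt is the intended entry point; the Answer schema, query, and use of ChatOpenAI are assumptions made for illustration.

from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, RetryOutputParser
from langchain.prompts import PromptTemplate

class Answer(BaseModel):
    director: str

prompt = PromptTemplate.from_template("Answer as JSON: {query}")
prompt_value = prompt.format_prompt(query="Who directed Alien?")

retry_parser = RetryOutputParser.from_llm(
    parser=PydanticOutputParser(pydantic_object=Answer),
    llm=ChatOpenAI(),
)

# A completion that fails to parse is sent back together with the
# original prompt for another attempt.
result = retry_parser.parse_with_prompt("not json at all", prompt_value)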
class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source]
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work, and raised the given error. Differs from RetryOutputParser in that this implementation provides the error that was raised back to the LLM, which in theory should give it more information on how to fix it.
Parameters: parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]), retry_chain (langchain.chains.llm.LLMChain)
Return type: None

attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]
attribute retry_chain: langchain.chains.llm.LLMChain [Required]

classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source]
Parameters: llm (langchain.base_language.BaseLanguageModel), parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]), prompt (langchain.prompts.base.BasePromptTemplate)
Return type: langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]
Instructions on how the LLM output should be formatted.
Return type: str

parse(completion)[source]
Parse the output of an LLM call. A method that takes in a string (assumed to be the output of a language model) and parses it into some structure.
Parameters: completion (str) – output of language model
Returns: structured output
Return type: langchain.output_parsers.retry.T

parse_with_prompt(completion, prompt_value)[source]
Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Parameters:
completion (str) – output of language model
prompt_value (langchain.schema.PromptValue) – prompt value
Returns: structured output
Return type: langchain.output_parsers.retry.T

class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source]
Bases: langchain.schema.BaseOutputParser
Parameters: response_schemas (List[langchain.output_parsers.structured.ResponseSchema])
Return type: None

attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]

classmethod from_response_schemas(response_schemas)[source]
Parameters: response_schemas (List[langchain.output_parsers.structured.ResponseSchema])
Return type: langchain.output_parsers.structured.StructuredOutputParser

get_format_instructions()[source]
Instructions on how the LLM output should be formatted.
Return type: str
parse(text)[source]
Parse the output of an LLM call. A method that takes in a string (assumed to be the output of a language model) and parses it into some structure.
Parameters: text (str) – output of language model
Returns: structured output
Return type: Any
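Example (illustrative). ResponseSchema entries define the keys; get_format_instructions() yields a fenced-JSON spec for the prompt, and parse() extracts the matching dict from the reply. The field names are assumptions made for illustration.

from langchain.output_parsers import ResponseSchema, StructuredOutputParser

response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)
instructions = parser.get_format_instructions()

# parse() expects the reply to contain a ```json fenced block.
reply = '```json\n{"answer": "Paris", "source": "wikipedia"}\n```'
result = parser.parse(reply)
# result == {"answer": "Paris", "source": "wikipedia"}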
Embeddings Wrappers around embedding modules. class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around OpenAI embedding models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to β€˜azure’ and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
    deployment="your-embeddings-deployment-name",
    model="your-embeddings-model-name",
    openai_api_base="https://your-endpoint.openai.azure.com/",
    openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters: client (Any), model (str), deployment (str), openai_api_version (Optional[str]), openai_api_base (Optional[str]), openai_api_type (Optional[str]), openai_proxy (Optional[str]), embedding_ctx_length (int), openai_api_key (Optional[str]), openai_organization (Optional[str]), allowed_special (Union[Literal['all'], typing.Set[str]]), disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]), chunk_size (int), max_retries (int), request_timeout (Optional[Union[float, Tuple[float, float]]]), headers (Any), tiktoken_model_name (Optional[str])
Return type: None

attribute chunk_size: int = 1000
Maximum number of texts to embed in each batch.

attribute max_retries: int = 6
Maximum number of retries to make when generating.

attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None
Timeout in seconds for the OpenAI request.

attribute tiktoken_model_name: Optional[str] = None
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.

async aembed_documents(texts, chunk_size=0)[source]
Call out to OpenAI's embedding endpoint async for embedding search docs.
Parameters:
texts (List[str]) – The list of texts to embed.
chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

async aembed_query(text)[source]
Call out to OpenAI's embedding endpoint async for embedding query text.
Parameters: text (str) – The text to embed.
Returns: Embedding for the text.
Return type: List[float]

embed_documents(texts, chunk_size=0)[source]
Call out to OpenAI's embedding endpoint for embedding search docs.
Parameters:
texts (List[str]) – The list of texts to embed.
chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]
embed_query(text)[source]
Call out to OpenAI's embedding endpoint for embedding query text.
Parameters: text (str) – The text to embed.
Returns: Embedding for the text.
Return type: List[float]
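Example (illustrative). Document and query embeddings live in the same vector space, so a query vector can be compared against document vectors directly; this sketch assumes OPENAI_API_KEY is set in the environment.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])
query_vector = embeddings.embed_query("what is in doc one?")
# doc_vectors is a List[List[float]], query_vector a List[float];
# cosine similarity between them ranks documents against the query.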
class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs
)
Parameters: client (Any), model_name (str), cache_folder (Optional[str]), model_kwargs (Dict[str, Any]), encode_kwargs (Dict[str, Any])
Return type: None

attribute cache_folder: Optional[str] = None
Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.

attribute encode_kwargs: Dict[str, Any] [Optional]
Keyword arguments to pass when calling the encode method of the model.

attribute model_kwargs: Dict[str, Any] [Optional]
Keyword arguments to pass to the model.

attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2'
Model name to use.

embed_documents(texts)[source]
Compute doc embeddings using a HuggingFace transformer model.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

embed_query(text)[source]
Compute query embeddings using a HuggingFace transformer model.
Parameters: text (str) – The text to embed.
Returns: Embeddings for the text.
Return type: List[float]
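Example (illustrative). sentence_transformers runs locally, so this works offline once the model is downloaded; numpy is used here only to show a cosine-similarity ranking and is an assumption of this sketch, not a requirement of the class.

import numpy as np
from langchain.embeddings import HuggingFaceEmbeddings

hf = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
docs = ["The cat sat on the mat.", "Quarterly revenue grew 12%."]
doc_vecs = np.array(hf.embed_documents(docs))
q_vec = np.array(hf.embed_query("Where is the cat?"))

# Cosine similarity: higher means more semantically similar.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best_doc = docs[int(np.argmax(scores))]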
class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
    model="embed-english-light-v2.0",
    cohere_api_key="my-api-key"
)
Parameters: client (Any), model (str), truncate (Optional[str]), cohere_api_key (Optional[str])
Return type: None

attribute model: str = 'embed-english-v2.0'
Model name to use.

attribute truncate: Optional[str] = None
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END").

embed_documents(texts)[source]
Call out to Cohere's embedding endpoint.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

embed_query(text)[source]
Call out to Cohere's embedding endpoint.
Parameters: text (str) – The text to embed.
Returns: Embeddings for the text.
Return type: List[float]

class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source]
Bases: langchain.embeddings.base.Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed in an Elasticsearch cluster. It requires an Elasticsearch connection object and the model_id of the model deployed in the cluster. In Elasticsearch you need to have an embedding model loaded and deployed:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Parameters: client (MlClient), model_id (str), input_field (str)

classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source]
Instantiate embeddings from Elasticsearch credentials.
Parameters:
model_id (str) – The model_id of the model deployed in the Elasticsearch cluster.
input_field (str) – The name of the key for the input text field in the document. Defaults to 'text_field'.
es_cloud_id (Optional[str]) – The Elasticsearch cloud ID to connect to.
es_user (Optional[str]) – Elasticsearch username.
es_password (Optional[str]) – Elasticsearch password.
Return type: langchain.embeddings.elasticsearch.ElasticsearchEmbeddings
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    input_field=input_field,
    # es_cloud_id="foo",
    # es_user="bar",
    # es_password="baz",
)
documents = [
    "This is an example document.",
    "Another example document to generate embeddings for.",
]
document_embeddings = embeddings.embed_documents(documents)

classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source]
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings class using an existing Elasticsearch connection. The connection object is used to create an MlClient, which is then used to initialize the ElasticsearchEmbeddings instance.
Parameters:
model_id (str) – The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch) – An existing Elasticsearch connection object.
input_field (str, optional) – The name of the key for the input text field in the document. Defaults to 'text_field'.
Returns: An instance of the ElasticsearchEmbeddings class.
Return type: ElasticsearchEmbeddings
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
    hosts=["localhost:9200"],
    http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
    model_id,
    es_connection,
    input_field=input_field,
)
documents = [
    "This is an example document.",
    "Another example document to generate embeddings for.",
]
document_embeddings = embeddings.embed_documents(documents)

embed_documents(texts)[source]
Generate embeddings for a list of documents.
Parameters: texts (List[str]) – A list of document text strings to generate embeddings for.
Returns: A list of embeddings, one for each document in the input list.
Return type: List[List[float]]

embed_query(text)[source]
Generate an embedding for a single query text.
Parameters: text (str) – The query text to generate an embedding for.
Returns: The embedding for the input query text.
Return type: List[float]
class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=-1, seed=-1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Parameters: client (Any), model_path (str), n_ctx (int), n_parts (int), seed (int), f16_kv (bool), logits_all (bool), vocab_only (bool), use_mlock (bool), n_threads (Optional[int]), n_batch (Optional[int]), n_gpu_layers (Optional[int])
Return type: None

attribute f16_kv: bool = False
Use half-precision for key/value cache.

attribute logits_all: bool = False
Return logits for all tokens, not just the last token.

attribute n_batch: Optional[int] = 8
Number of tokens to process in parallel. Should be a number between 1 and n_ctx.

attribute n_ctx: int = 512
Token context window.

attribute n_gpu_layers: Optional[int] = None
Number of layers to be loaded into GPU memory. Default None.

attribute n_parts: int = -1
Number of parts to split the model into. If -1, the number of parts is automatically determined.

attribute n_threads: Optional[int] = None
Number of threads to use. If None, the number of threads is automatically determined.
attribute seed: int = -1
Seed. If -1, a random seed is used.

attribute use_mlock: bool = False
Force the system to keep the model in RAM.

attribute vocab_only: bool = False
Only load the vocabulary, no weights.

embed_documents(texts)[source]
Embed a list of documents using the Llama model.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

embed_query(text)[source]
Embed a query using the Llama model.
Parameters: text (str) – The text to embed.
Returns: Embeddings for the text.
Return type: List[float]
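Example (illustrative). Embeddings are computed locally by llama.cpp; the model path is a placeholder and the tuning knobs shown are assumptions, not required settings.

from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(
    model_path="/path/to/model.bin",  # placeholder path to a local model file
    n_threads=8,                      # optional: pin the CPU thread count
)
vector = llama.embed_query("local, offline embedding")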
class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
    repo_id=repo_id,
    task="feature-extraction",
    huggingfacehub_api_token="my-api-key",
)
Parameters: client (Any), repo_id (str), task (Optional[str]), model_kwargs (Optional[dict]), huggingfacehub_api_token (Optional[str])
Return type: None

attribute model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.

attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2'
Model name to use.

attribute task: Optional[str] = 'feature-extraction'
Task to call the model with.

embed_documents(texts)[source]
Call out to HuggingFaceHub's embedding endpoint for embedding search docs.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

embed_query(text)[source]
Call out to HuggingFaceHub's embedding endpoint for embedding query text.
Parameters: text (str) – The text to embed.
Returns: Embeddings for the text.
Return type: List[float]

class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Parameters: embed (Any), model_id (str)
Return type: None
attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'
Model name to use.

embed_documents(texts)[source]
Compute doc embeddings using a modelscope embedding model.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]

embed_query(text)[source]
Compute query embeddings using a modelscope embedding model.
Parameters: text (str) – The text to embed.
Returns: Embeddings for the text.
Return type: List[float]

class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Parameters: embed (Any), model_url (str)
Return type: None

attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'
Model name to use.

embed_documents(texts)[source]
Compute doc embeddings using a TensorflowHub embedding model.
Parameters: texts (List[str]) – The list of texts to embed.
Returns: List of embeddings, one for each text.
Return type: List[List[float]]