Optional Args for Approximate Search:
efficient_filter: the Lucene Engine or Faiss Engine decides whether to
perform an exact k-NN search with pre-filtering or an approximate search
with modified post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
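A minimal sketch of these kwargs in use (the URL, index name, and filter
values below are illustrative assumptions, not from this page):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="my-index",  # assumed existing index
    embedding_function=embeddings,
)
# Exact k-NN via Script Scoring, pre-filtering on a hypothetical metadata field
docs = docsearch.similarity_search(
    "example query",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"bool": {"filter": {"term": {"metadata.category": "news"}}}},
)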
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and their scores most similar to the query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents along with their scores, most similar to the query.
Optional Args: same as similarity_search
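Continuing the sketch above, scores come back paired with each document:
docs_and_scores = docsearch.similarity_search_with_score("example query", k=4)
for doc, score in docs_and_scores:
    print(score, doc.page_content)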
property embeddings: langchain.embeddings.base.Embeddings¶
Access the query embedding object if available.
Examples using OpenSearchVectorSearch¶
OpenSearch
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html
langchain.vectorstores.base.VectorStore¶
class langchain.vectorstores.base.VectorStore[source]¶
Bases: ABC
Interface for vector stores.
Methods
__init__()
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str][source]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str][source]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST[source]¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST[source]¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever[source]¶
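As a hedged sketch (kwarg names follow VectorStoreRetriever conventions in
this version), as_retriever wraps the store for use in chains:
# vectorstore is any concrete VectorStore instance
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20},
)
docs = retriever.get_relevant_documents("example query")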
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool][source]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST[source]¶
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST[source]¶
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
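For example, a sketch (vectorstore is any concrete implementation) of trading
relevance against diversity via lambda_mult:
# Fetch 20 candidates, return the 4 that best balance similarity to the
# query (lambda_mult near 1) against diversity among results (near 0).
docs = vectorstore.max_marginal_relevance_search(
    "example query", k=4, fetch_k=20, lambda_mult=0.5
)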
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query using specified search type.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
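A short sketch of the score_threshold kwarg (vectorstore is any concrete
implementation):
# Keep only documents whose normalized relevance score is at least 0.8.
docs_and_scores = vectorstore.similarity_search_with_relevance_scores(
    "example query", k=4, score_threshold=0.8
)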
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Run similarity search with distance.
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using VectorStore¶
BabyAGI User Guide
BabyAGI with Tools
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.base.VectorStore.html
langchain.vectorstores.sklearn.ParquetSerializer¶
class langchain.vectorstores.sklearn.ParquetSerializer(persist_path: str)[source]¶
Bases: BaseSerializer
Serializes data in Apache Parquet format using the pyarrow package.
Methods
__init__(persist_path)
extension()
The file extension suggested by this serializer (without dot).
load()
Loads the data from the persist_path.
save(data)
Saves the data to the persist_path.
classmethod extension() → str[source]¶
The file extension suggested by this serializer (without dot).
load() → Any[source]¶
Loads the data from the persist_path.
save(data: Any) → None[source]¶
Saves the data to the persist_path.
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.sklearn.ParquetSerializer.html
langchain.vectorstores.annoy.Annoy¶
class langchain.vectorstores.annoy.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: Docstore, index_to_docstore_id: Dict[int, str])[source]¶
Bases: VectorStore
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)
Initialize with necessary components.
Methods
__init__(embedding_function, index, metric, ...)
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct Annoy wrapper from embeddings.
from_texts(texts, embedding[, metadatas, ...])
Construct Annoy wrapper from raw documents.
load_local(folder_path, embeddings)
Load Annoy index, docstore, and index_to_docstore_id from disk.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
process_index_results(idxs, dists)
Turns Annoy results into a list of documents and scores.
save_local(folder_path[, prefault])
Save Annoy index, docstore, and index_to_docstore_id to disk.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, search_k])
Return docs most similar to query.
similarity_search_by_index(docstore_index[, ...])
Return docs most similar to docstore_index.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return docs most similar to query.
similarity_search_with_score_by_index(...[, ...])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
Return docs most similar to query.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → Annoy[source]¶
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings – List of tuples of (text, embedding)
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to “angular”.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Creates an in-memory docstore with provided embeddings.
Initializes the Annoy database.
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → Annoy[source]¶
Construct Annoy wrapper from raw documents.
Parameters
texts – List of documents to index.
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to “angular”.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore.
Initializes the Annoy database.
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: Embeddings) → Annoy[source]¶
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
k – Number of Documents to return. Defaults to 4.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
process_index_results(idxs: List[int], dists: List[float]) → List[Tuple[Document, float]][source]¶
Turns Annoy results into a list of documents and scores.
Parameters
idxs – List of indices of the documents in the index.
dists – List of distances of the documents in the index.
Returns
List of Documents and scores.
save_local(folder_path: str, prefault: bool = False) → None[source]¶
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
prefault – Whether to pre-load the index into memory.
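A minimal save/load round trip (folder name illustrative; the same embeddings
object must be supplied on load so new queries can be embedded):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy

embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["hello annoy", "hello world"], embeddings)
db.save_local("my_annoy_index")
db = Annoy.load_local("my_annoy_index", embeddings)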
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, search_k: int = -1, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query.
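Continuing the sketch above, a larger search_k inspects more nodes, trading
speed for accuracy:
docs = db.similarity_search("hello", k=2, search_k=1000)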
similarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any) → List[Document][source]¶
Return docs most similar to docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, search_k: int = -1) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = -1) → List[Tuple[Document, float]][source]¶
Return docs most similar to the document at docstore_index.
Parameters
docstore_index – Index of the document in the docstore to query with.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = -1) → List[Tuple[Document, float]][source]¶
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using Annoy¶
Annoy
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.annoy.Annoy.html
langchain.vectorstores.starrocks.has_mul_sub_str¶
langchain.vectorstores.starrocks.has_mul_sub_str(s: str, *args: Any) → bool[source]¶
Check if a string has multiple substrings.
:param s: The string to check
:param *args: The substrings to check for in the string
Returns
True if all substrings are present in the string, False otherwise
Return type
bool
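For example (a small sketch):
from langchain.vectorstores.starrocks import has_mul_sub_str

has_mul_sub_str("array<float>", "array", "float")   # True: both present
has_mul_sub_str("array<float>", "array", "double")  # False: "double" absent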
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.has_mul_sub_str.html
langchain.vectorstores.starrocks.StarRocks¶
class langchain.vectorstores.starrocks.StarRocks(embedding: Embeddings, config: Optional[StarRocksSettings] = None, **kwargs: Any)[source]¶
Bases: VectorStore
Wrapper around the StarRocks vector database.
You need the pymysql python package and a valid account
to connect to StarRocks.
Right now StarRocks has only implemented the cosine_similarity function to
compute the distance between two vectors, and there is no vector index yet,
so we have to iterate over all vectors and compute the spatial distance.
For more information, please visit the
[StarRocks official site](https://www.starrocks.io/) and the
[StarRocks GitHub repository](https://github.com/StarRocks/starrocks).
StarRocks wrapper for LangChain
embedding_function (Embeddings): embedding function to use
config (StarRocksSettings): configuration for the StarRocks client
Methods
__init__(embedding[, config])
StarRocks wrapper for LangChain
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, batch_size, ids])
Insert more texts through the embeddings and add to the VectorStore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete by vector ID or other criteria.
drop()
Helper function: Drop data
escape_str(value)
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Create StarRocks wrapper with existing texts
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, where_str])
Perform a similarity search with StarRocks
similarity_search_by_vector(embedding[, k, ...])
Perform a similarity search with StarRocks by vectors
similarity_search_with_relevance_scores(query)
Perform a similarity search with StarRocks
similarity_search_with_score(*args, **kwargs)
Run similarity search with distance.
Attributes
embeddings
Access the query embedding object if available.
metadata_column
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]¶
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts – Iterable of strings to add to the VectorStore.
ids – Optional list of ids to associate with the texts.
batch_size – Batch size of insertion
metadatas – Optional column data to be inserted
Returns
List of ids from adding the texts into the VectorStore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
drop() → None[source]¶
Helper function: Drop data
escape_str(value: str) → str[source]¶
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[StarRocksSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → StarRocks[source]¶
Create StarRocks wrapper with existing texts
Parameters
embedding (Embeddings) – Function to extract text embeddings
texts (Iterable[str]) – List or tuple of strings to be added
config (StarRocksSettings, Optional) – StarRocks configuration
text_ids (Optional[Iterable], optional) – IDs for the texts.
Defaults to None.
batch_size (int, optional) – Batch size when transmitting data to StarRocks.
Defaults to 32.
metadatas (List[dict], optional) – metadata for the texts. Defaults to None.
Returns
StarRocks Index
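A hedged construction sketch; the StarRocksSettings field names used here
(host, port, username, password, database, table) are assumptions to verify
against your installed version:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import StarRocks
from langchain.vectorstores.starrocks import StarRocksSettings

texts = ["StarRocks is an OLAP database", "LangChain wraps vector stores"]
settings = StarRocksSettings(  # field names assumed, see note above
    host="127.0.0.1",
    port=9030,
    username="root",
    password="",
    database="langchain",
    table="langchain_docs",
)
docsearch = StarRocks.from_texts(texts, OpenAIEmbeddings(), config=settings)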
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search with StarRocks
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for this column is metadata.
Returns
List of Documents
Return type
List[Document]
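A sketch of a filtered search, assuming each stored document carried a
"source" key in its metadata:
docs = docsearch.similarity_search(
    "OLAP database",
    k=4,
    # Build where_str yourself; never interpolate end-user input (SQL injection).
    where_str=f"{docsearch.metadata_column}.source = 'docs'",
)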
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Perform a similarity search with StarRocks by vector
Parameters
embedding (List[float]) – embedding vector to look up documents similar to
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for this column is metadata.
Returns
List of Documents most similar to the embedding
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Perform a similarity search with StarRocks
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for this column is metadata.
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[Document, float]]
similarity_search_with_score(*args: Any, **kwargs: Any) → List[Tuple[Document, float]]¶
Run similarity search with distance.
property embeddings: langchain.embeddings.base.Embeddings¶
Access the query embedding object if available.
property metadata_column: str¶
Examples using StarRocks¶
StarRocks
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.starrocks.StarRocks.html
langchain.vectorstores.hologres.Hologres¶
class langchain.vectorstores.hologres.Hologres(connection_string: str, embedding_function: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', pre_delete_table: bool = False, logger: Optional[Logger] = None)[source]¶
Bases: VectorStore
VectorStore implementation using Hologres.
connection_string is a Hologres connection string.
embedding_function: any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
ndims is the number of dimensions of the embedding output.
table_name is the name of the table to store embeddings and data.
(default: langchain_pg_embedding)
- NOTE: The table will be created when initializing the store (if not exists),
so make sure the user has the right permissions to create tables.
pre_delete_table: if True, will delete the table if it exists. (default: False)
- Useful for testing.
Methods
__init__(connection_string, embedding_function)
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_embeddings(texts, embeddings, metadatas, ...)
Add embeddings to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
connection_string_from_db_params(host, port, ...)
Return connection string from database parameters.
create_table()
create_vector_extension()
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding[, ...])
Return VectorStore initialized from documents and embeddings.
from_embeddings(text_embeddings, embedding)
Construct Hologres wrapper from raw documents and pre-generated embeddings.
from_existing_index(embedding[, ndims, ...])
Get an instance of an existing Hologres store. This method will return the instance of the store without inserting any new embeddings.
from_texts(texts, embedding[, metadatas, ...])
Return VectorStore initialized from texts and embeddings.
get_connection_string(kwargs)
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Run similarity search with Hologres with distance.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, filter])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_embeddings(texts: Iterable[str], embeddings: List[List[float]], metadatas: List[dict], ids: List[str], **kwargs: Any) → None[source]¶
Add embeddings to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
embeddings – List of list of embedding vectors.
metadatas – List of metadatas associated with the texts.
kwargs – vectorstore specific parameters
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
classmethod connection_string_from_db_params(host: str, port: int, database: str, user: str, password: str) → str[source]¶
Return connection string from database parameters.
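For example (a sketch; the endpoint is hypothetical, and passing the result
via a connection_string kwarg is an assumption based on the note under
from_documents below):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Hologres

conn_str = Hologres.connection_string_from_db_params(
    host="my-hologres-endpoint.example.com",  # hypothetical endpoint
    port=80,
    database="langchain",
    user="my_user",
    password="my_password",
)
# The HOLOGRES_CONNECTION_STRING env var works as an alternative to the kwarg.
vectorstore = Hologres.from_texts(["hello hologres"], OpenAIEmbeddings(),
                                  connection_string=conn_str)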
create_table() → None[source]¶
create_vector_extension() → None[source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → Hologres[source]¶
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Construct Hologres wrapper from raw documents and pre-generated embeddings.
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Example
from langchain import Hologres
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_existing_index(embedding: Embeddings, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Get an instance of an existing Hologres store. This method will
return the instance of the store without inserting any new
embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ndims: int = 1536, table_name: str = 'langchain_pg_embedding', ids: Optional[List[str]] = None, pre_delete_table: bool = False, **kwargs: Any) → Hologres[source]¶
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
classmethod get_connection_string(kwargs: Dict[str, Any]) → str[source]¶
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Run similarity search with Hologres with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) → List[Tuple[Document, float]][source]¶
property embeddings: langchain.embeddings.base.Embeddings¶
Access the query embedding object if available.
Examples using Hologres¶
Hologres
https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.hologres.Hologres.html
langchain.vectorstores.elastic_vector_search.ElasticKnnSearch¶
class langchain.vectorstores.elastic_vector_search.ElasticKnnSearch(index_name: str, embedding: Embeddings, es_connection: Optional['Elasticsearch'] = None, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, vector_query_field: Optional[str] = 'vector', query_field: Optional[str] = 'text')[source]¶
Bases: VectorStore, ABC
ElasticKnnSearch is a class for performing k-nearest neighbor
(k-NN) searches on text data using Elasticsearch.
This class is used to create an Elasticsearch index of text data that
can be searched using k-NN search. The text data is transformed into
vector embeddings using a provided embedding model, and these embeddings
are stored in the Elasticsearch index.
index_name¶
The name of the Elasticsearch index.
Type
str
embedding¶
The embedding model to use for transforming text data
into vector embeddings.
Type
Embeddings
es_connection¶
An existing Elasticsearch connection.
Type
Elasticsearch, optional
es_cloud_id¶
The Cloud ID of your Elasticsearch Service
deployment.
Type
str, optional
es_user¶
The username for your Elasticsearch Service deployment.
Type
str, optional
es_password¶
The password for your Elasticsearch Service
deployment.
Type
str, optional
vector_query_field¶
The name of the field in the Elasticsearch
index that contains the vector embeddings.
Type
str, optional
query_field¶
The name of the field in the Elasticsearch index
that contains the original text data.
Type
str, optional
Usage:
>>> from embeddings import Embeddings
>>> embedding = Embeddings.load('glove')
>>> es_search = ElasticKnnSearch('my_index', embedding)
>>> es_search.add_texts(['Hello world!', 'Another text'])
>>> results = es_search.knn_search('Hello')
[(Document(page_content='Hello world!', metadata={}), 0.9)]
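For an Elastic Cloud deployment, the same constructor can be driven by the es_cloud_id / es_user / es_password parameters instead of an explicit connection; the credential values below are placeholders:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch

embedding = OpenAIEmbeddings()
es_search = ElasticKnnSearch(
    index_name="my_index",
    embedding=embedding,
    es_cloud_id="my-deployment:...",   # from the Elastic Cloud console
    es_user="elastic",
    es_password="<password>",
)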
Methods
__init__(index_name, embedding[, ...])
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, model_id, ...])
Add a list of texts to the Elasticsearch index.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
create_knn_index(mapping)
Create a new k-NN index in Elasticsearch.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas])
Create a new ElasticKnnSearch instance and add a list of texts to the Elasticsearch index.
knn_hybrid_search([query, k, query_vector, ...])
Perform a hybrid k-NN and text search on the Elasticsearch index.
knn_search([query, k, query_vector, ...])
Perform a k-NN search on the Elasticsearch index.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Pass through to knn_search
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Pass through to knn_search, including scores
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[Dict[Any, Any]]] = None, model_id: Optional[str] = None, refresh_indices: bool = False, **kwargs: Any) → List[str][source]¶
Add a list of texts to the Elasticsearch index.
Parameters
texts (Iterable[str]) – The texts to add to the index.
metadatas (List[Dict[Any, Any]], optional) – A list of metadata dictionaries
to associate with the texts.
model_id (str, optional) – The ID of the model to use for transforming the
texts into vectors.
refresh_indices (bool, optional) – Whether to refresh the Elasticsearch
indices after adding the texts.
**kwargs – Arbitrary keyword arguments.
Returns
A list of IDs for the added texts.
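A short usage sketch; the texts and metadata are illustrative, and refresh_indices=True trades a little indexing throughput for immediate searchability:
# Assumes `es_search` is an ElasticKnnSearch instance as constructed above.
ids = es_search.add_texts(
    ["Hello world!", "Another text"],
    metadatas=[{"topic": "greeting"}, {"topic": "misc"}],
    refresh_indices=True,
)
print(ids)  # list of Elasticsearch document IDs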
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
create_knn_index(mapping: Dict) → None[source]¶
Create a new k-NN index in Elasticsearch.
Parameters
mapping (Dict) – The mapping to use for the new index.
Returns
None
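A hedged example of a mapping for an Elasticsearch 8.x kNN index; the field names must line up with vector_query_field and query_field, and dims (1536 here) must match the embedding model's output size:
mapping = {
    "properties": {
        "text": {"type": "text"},
        "vector": {
            "type": "dense_vector",
            "dims": 1536,          # must equal the embedding dimension
            "index": True,
            "similarity": "cosine",
        },
    }
}
es_search.create_knn_index(mapping)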
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → ElasticKnnSearch[source]¶
Create a new ElasticKnnSearch instance and add a list of texts to the Elasticsearch index.
Parameters
texts (List[str]) – The texts to add to the index.
embedding (Embeddings) – The embedding model to use for transforming the
texts into vectors.
metadatas (List[Dict[Any, Any]], optional) – A list of metadata dictionaries
to associate with the texts.
**kwargs – Arbitrary keyword arguments.
Returns
A new ElasticKnnSearch instance.
knn_hybrid_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, knn_boost: Optional[float] = 0.9, query_boost: Optional[float] = 0.1, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None, page_content: Optional[str] = 'text') → List[Tuple[Document, float]][source]¶
Perform a hybrid k-NN and text search on the Elasticsearch index.
Parameters
query (str, optional) – The query text to search for.
k (int, optional) – The number of nearest neighbors to return.
query_vector (List[float], optional) – The query vector to search for.
model_id (str, optional) – The ID of the model to use for transforming the
query text into a vector.
size (int, optional) – The number of search results to return.
source (bool, optional) – Whether to return the source of the search results.
knn_boost (float, optional) – The boost value to apply to the k-NN search
results.
query_boost (float, optional) – The boost value to apply to the text search
results.
fields (List[Mapping[str, Any]], optional) – The fields to return in the
search results.
page_content (str, optional) – The name of the field that contains the page
content.
Returns
A list of tuples, where each tuple contains a Document object and a score.
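A sketch of a hybrid query; knn_boost and query_boost weight the vector match against the BM25 text match and are conventionally chosen to sum to 1.0:
# Assumes `es_search` is an ElasticKnnSearch instance.
results = es_search.knn_hybrid_search(
    query="greetings",
    k=5,
    knn_boost=0.7,    # weight of the vector similarity component
    query_boost=0.3,  # weight of the BM25 text-match component
)
for doc, score in results:
    print(f"{score:.3f}  {doc.page_content}")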
knn_search(query: Optional[str] = None, k: Optional[int] = 10, query_vector: Optional[List[float]] = None, model_id: Optional[str] = None, size: Optional[int] = 10, source: Optional[bool] = True, fields: Optional[Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...]]] = None, page_content: Optional[str] = 'text') → List[Tuple[Document, float]][source]¶
Perform a k-NN search on the Elasticsearch index.
Parameters
query (str, optional) – The query text to search for.
k (int, optional) – The number of nearest neighbors to return.
query_vector (List[float], optional) – The query vector to search for.
model_id (str, optional) – The ID of the model to use for transforming the
query text into a vector.
size (int, optional) – The number of search results to return.
source (bool, optional) – Whether to return the source of the search results.
fields (List[Mapping[str, Any]], optional) – The fields to return in the
search results.
page_content (str, optional) – The name of the field that contains the page
content.
Returns
A list of tuples, where each tuple contains a Document object and a score.
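Either a text query (embedded client-side via the configured model) or a precomputed query_vector can drive the search; a minimal sketch:
# Text query: the embedding model converts it to a vector first.
results = es_search.knn_search(query="Hello", k=3)

# Alternatively, search with a vector you already have; the literal
# vector below is a placeholder for a full-size embedding.
# results = es_search.knn_search(query_vector=[0.1, 0.2, ...], k=3)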
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Pass through to knn_search
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 10, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Pass through to knn_search, including scores
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using ElasticKnnSearch¶
ElasticSearch
langchain.vectorstores.pinecone.Pinecone¶
class langchain.vectorstores.pinecone.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None, distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE)[source]¶
Bases: VectorStore
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
Initialize with Pinecone client.
Methods
__init__(index, embedding_function, text_key)
Initialize with Pinecone client.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids, ...])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids, delete_all, namespace, filter])
Delete by vector IDs or filter.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_existing_index(index_name, embedding[, ...])
Load pinecone vectorstore from index name.
from_texts(texts, embedding[, metadatas, ...])
Construct Pinecone wrapper from raw documents.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter, namespace])
Return pinecone documents most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return pinecone documents most similar to query, along with scores.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
namespace – Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
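A usage sketch with the optional namespace and batching; the namespace name is illustrative:
# Assumes `vectorstore` is the Pinecone wrapper constructed above.
ids = vectorstore.add_texts(
    ["doc one", "doc two"],
    metadatas=[{"source": "a"}, {"source": "b"}],
    namespace="my-namespace",  # keeps these vectors isolated
    batch_size=32,             # upserts are sent in chunks of this size
)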
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, delete_all: Optional[bool] = None, namespace: Optional[str] = None, filter: Optional[dict] = None, **kwargs: Any) → None[source]¶
Delete by vector IDs or filter.
Parameters
ids – List of ids to delete.
filter – Dictionary of conditions to filter vectors to delete.
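The selectors can be combined with a namespace; a hedged sketch of the three typical invocations (the metadata filter requires a Pinecone tier that supports it):
vectorstore.delete(ids=["id-1", "id-2"])               # by explicit IDs
vectorstore.delete(filter={"source": "a"})             # by metadata filter
vectorstore.delete(delete_all=True, namespace="tmp")   # clear a namespace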
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_existing_index(index_name: str, embedding: Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → Pinecone[source]¶
Load pinecone vectorstore from index name.
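A sketch for reconnecting to an index that was populated earlier; the index name and text_key are placeholders:
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="***", environment="...")
vectorstore = Pinecone.from_existing_index(
    index_name="langchain-demo",
    embedding=OpenAIEmbeddings(),
    text_key="text",  # metadata field holding the original text
)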
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, upsert_kwargs: Optional[dict] = None, **kwargs: Any) → Pinecone[source]¶
Construct Pinecone wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index.
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
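In practice, fetch_k candidates are retrieved first and MMR re-ranks them down to k; a sketch, assuming the Pinecone wrapper from the class example:
# lambda_mult=0.5 weighs relevance and diversity equally; lower it for
# more diverse results, raise it for more query-similar ones.
docs = vectorstore.max_marginal_relevance_search(
    "query text",
    k=4,
    fetch_k=20,
    lambda_mult=0.5,
)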
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Return pinecone documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Defaults to the '' (empty) namespace.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[Document, float]][source]¶
Return pinecone documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Defaults to the '' (empty) namespace.
Returns
List of Documents most similar to the query, along with a score for each
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using Pinecone¶
Pinecone
Self-querying with Pinecone
langchain.vectorstores.elastic_vector_search.ElasticVectorSearch¶
class langchain.vectorstores.elastic_vector_search.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]¶
Bases: VectorStore, ABC
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) – The URL for the Elasticsearch instance.
index_name (str) – The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) – An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError – If the elasticsearch python package is not installed.
Initialize with necessary components.
Methods
__init__(elasticsearch_url, index_name, ...)
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids, ...])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
client_search(client, index_name, ...)
create_index(client, index_name, mapping)
delete([ids])
Delete by vector IDs.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Construct ElasticVectorSearch wrapper from raw documents.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, filter])
Return docs most similar to query.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of unique IDs.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
client_search(client: Any, index_name: str, script_query: Dict, size: int) → Any[source]¶
create_index(client: Any, index_name: str, mapping: Dict) → None[source]¶
delete(ids: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Delete by vector IDs.
Parameters
ids – List of ids to delete.
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) → ElasticVectorSearch[source]¶
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query, along with a score for each.
property embeddings: langchain.embeddings.base.Embeddings¶
Access the query embedding object if available.
Examples using ElasticVectorSearch¶
ElasticSearch
How to add memory to a Multi-Input Chain
langchain.vectorstores.sklearn.BaseSerializer¶
class langchain.vectorstores.sklearn.BaseSerializer(persist_path: str)[source]¶
Bases: ABC
Abstract base class for saving and loading data.
Methods
__init__(persist_path)
extension()
The file extension suggested by this serializer (without dot).
load()
Loads the data from the persist_path
save(data)
Saves the data to the persist_path
abstract classmethod extension() → str[source]¶
The file extension suggested by this serializer (without dot).
abstract load() → Any[source]¶
Loads the data from the persist_path
abstract save(data: Any) → None[source]¶
Saves the data to the persist_path
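A minimal hypothetical subclass persisting data as JSON, assuming the constructor stores persist_path on the instance; the concrete serializers in this module follow the same three-method contract:
import json
from typing import Any

from langchain.vectorstores.sklearn import BaseSerializer

class JsonSerializer(BaseSerializer):  # illustrative subclass
    @classmethod
    def extension(cls) -> str:
        return "json"

    def save(self, data: Any) -> None:
        with open(self.persist_path, "w") as f:
            json.dump(data, f)

    def load(self) -> Any:
        with open(self.persist_path) as f:
            return json.load(f)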
langchain.vectorstores.starrocks.StarRocksSettings¶
class langchain.vectorstores.starrocks.StarRocksSettings(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, host: str = 'localhost', port: int = 9030, username: str = 'root', password: str = '', column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata'}, database: str = 'default', table: str = 'langchain')[source]¶
Bases: BaseSettings
StarRocks Client Configuration
Attribute:
StarRocks_host (str) : A URL to connect to the StarRocks backend. Defaults to 'localhost'.
StarRocks_port (int) : URL port to connect with HTTP. Defaults to 9030.
username (str) : Username to login. Defaults to 'root'.
password (str) : Password to login. Defaults to '' (empty).
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'langchain'.
column_map (Dict) : Column type map to project column names onto langchain
semantics. Must have keys: text, id, vector, and must be the same size as
the number of columns. For example:
.. code-block:: python

    {'id': 'text_id',
     'embedding': 'text_embedding',
     'document': 'text_plain',
     'metadata': 'metadata_dictionary_in_json',
    }

Defaults to identity map.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata'}¶
param database: str = 'default'¶
param host: str = 'localhost'¶
param password: str = ''¶
param port: int = 9030¶
param table: str = 'langchain'¶
param username: str = 'root'¶
model Config[source]¶
Bases: object
env_file = '.env'¶
env_file_encoding = 'utf-8'¶
env_prefix = 'starrocks_'¶
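Because the Config sets env_prefix = 'starrocks_', every field can be supplied through environment variables (or a .env file) instead of constructor arguments; a sketch with placeholder values:
import os

from langchain.vectorstores.starrocks import StarRocksSettings

# Matching is case-insensitive, so STARROCKS_HOST maps to `host`.
os.environ["STARROCKS_HOST"] = "starrocks.internal"
os.environ["STARROCKS_PORT"] = "9030"

settings = StarRocksSettings()
print(settings.host, settings.port)  # starrocks.internal 9030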
Examples using StarRocksSettings¶
StarRocks
langchain.vectorstores.deeplake.DeepLake¶
class langchain.vectorstores.deeplake.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding: Optional[Embeddings] = None, embedding_function: Optional[Embeddings] = None, read_only: bool = False, ingestion_batch_size: int = 1000, num_workers: int = 0, verbose: bool = True, exec_option: Optional[str] = None, **kwargs: Any)[source]¶
Bases: VectorStore
Wrapper around Deep Lake, a data lake for deep learning applications.
We integrated Deep Lake's similarity search and filtering for fast prototyping.
Now, it supports Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless, doesn’t require another service and can be used with major
cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store. You can use the dataset
to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
Creates an empty DeepLakeVectorStore or loads an existing one.
The DeepLakeVectorStore is located at the specified path.
Examples
>>> # Create a vector store with default tensors
>>> deeplake_vectorstore = DeepLake(
... path = <path_for_storing_Data>,
... )
>>>
>>> # Create a vector store in the Deep Lake Managed Tensor Database
>>> data = DeepLake(
... path = "hub://org_id/dataset_name",
... exec_option = "tensor_db",
... )
Parameters
dataset_path (str) – Path to existing dataset or where to create
a new one. Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH.
token (str, optional) – Activeloop token, for fetching credentials
to the dataset at path if it is a Deep Lake dataset.
Tokens are normally autogenerated. Optional.
embedding (Embeddings, optional) – Function to convert
either documents or query. Optional.
embedding_function (Embeddings, optional) – Function to convert
either documents or query. Optional. Deprecated: keeping this
parameter for backwards compatibility.
read_only (bool) – Open dataset in read-only mode. Default is False.
ingestion_batch_size (int) – During data ingestion, data is divided
into batches. Batch size is the size of each batch.
Default is 1000.
num_workers (int) – Number of workers to use during data ingestion.
Default is 0.
verbose (bool) – Print dataset summary after each operation.
Default is True.
exec_option (str, optional) – DeepLakeVectorStore supports 3 ways to perform
searching - "python", "compute_engine", "tensor_db" - plus "auto".
Default is None.
- auto - Selects the best execution method based on the storage
location of the Vector Store. It is the default option.
- python - Pure-python implementation that runs on the client. WARNING:
using this with big datasets can lead to memory issues. Data can be
stored anywhere.
- compute_engine - C++ implementation of the Deep Lake Compute Engine
that runs on the client. Can be used for any data stored in
or connected to Deep Lake. Not for in-memory or local datasets.
- tensor_db - Hosted Managed Tensor Database that is responsible for
storage and query execution. Only for data stored in
the Deep Lake Managed Database. Use runtime = {"db_engine": True}
during dataset creation.
**kwargs – Other optional keyword arguments.
Raises
ValueError – If some condition is not met.
Methods
__init__([dataset_path, token, embedding, ...])
Creates an empty DeepLakeVectorStore or loads an existing one.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas])
Return VectorStore initialized from texts and embeddings.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
delete([ids])
Delete the entities in the dataset.
delete_dataset()
Delete the collection.
ds()
force_delete_by_path(path)
Force delete dataset by path.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts[, embedding, metadatas, ...])
Create a Deep Lake dataset from raw documents.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k])
Run similarity search with Deep Lake with distance returned.
Attributes
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str]¶
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Examples
>>> ids = deeplake_vectorstore.add_texts(
... texts = <list_of_texts>,
... metadatas = <list_of_metadata_jsons>,
... ids = <list_of_ids>,
... )
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
embedding_function (Optional[Embeddings], optional) – Embedding function
to use to convert the text into embeddings.
**kwargs (Any) – Any additional keyword arguments passed is not supported
by this method.
Returns
List of IDs of the added texts.
Return type
List[str]
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → VST¶
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document]¶
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[Document]¶
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
delete(ids: Optional[List[str]] = None, **kwargs: Any) → bool[source]¶
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete.
Defaults to None.
**kwargs – Other keyword arguments that subclasses might use.
- filter (Optional[Dict[str, str]], optional): The filter to delete by.
- delete_all (Optional[bool], optional): Whether to drop the dataset.
Returns
Whether the delete operation was successful.
Return type
bool
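A usage sketch; the filter shape shown is illustrative and mirrors the keyword arguments documented above:
# Assumes `deeplake_vectorstore` is an initialized DeepLake instance.
deeplake_vectorstore.delete(ids=["id-1", "id-2"])      # by document IDs
# deeplake_vectorstore.delete(filter={"source": "draft"})  # by filter
# deeplake_vectorstore.delete(delete_all=True)             # drop everything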
delete_dataset() → None[source]¶
Delete the collection.
ds() → Any[source]¶
classmethod force_delete_by_path(path: str) → None[source]¶
Force delete dataset by path.
Parameters
path (str) – path of the dataset to delete.
Raises
ValueError – if deeplake is not installed.
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) → DeepLake[source]¶
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise it defaults to ./deeplake.
Examples:
>>> # Search using an embedding
>>> vector_store = DeepLake.from_texts(
...     texts = <the_texts_that_you_want_to_embed>,
...     embedding_function = <embedding_function_for_query>,
...     k = <number_of_items_to_return>,
...     exec_option = <preferred_exec_option>,
... )
Parameters
dataset_path (str) –
The full path to the dataset. Can be:
Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use ‘activeloop login’ from command line)
AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are
required in either the environment.
Google Cloud Storage path of the form gcs://bucketname/path/to/dataset.
Credentials are required in either the environment.
Local file system path of the form ./path/to/dataset or ~/path/to/dataset
or path/to/dataset.
In-memory path of the form mem://path/to/dataset which doesn’t save the
dataset, but keeps it in memory instead.
Should be used only for testing as it does not persist.
texts (List[str]) – List of texts to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
Note, in other places, it is called embedding_function.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
**kwargs – Additional keyword arguments.
Returns
Deep Lake dataset.
Return type
DeepLake
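A concrete sketch of creating a local dataset, assuming OpenAI credentials are configured; the texts and metadata are illustrative:
>>> from langchain.embeddings import OpenAIEmbeddings
>>> from langchain.vectorstores import DeepLake
>>> vector_store = DeepLake.from_texts(
...     texts=["harrison worked at kensho", "bears like honey"],
...     embedding=OpenAIEmbeddings(),
...     dataset_path="./deeplake/",
...     metadatas=[{"source": "note-1"}, {"source": "note-2"}],
... )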
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Examples:
>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
...     query=<query_to_search>,
...     embedding_function=<embedding_function_for_query>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch for the MMR algorithm.
lambda_mult – Value between 0 and 1. 0 corresponds
to maximum diversity and 1 to minimum.
Defaults to 0.5.
exec_option (str) – Supports 3 ways to perform searching:
- "python" - Pure-python implementation running on the client.
  Can be used for data stored anywhere. WARNING: using this
  option with big datasets is discouraged due to potential
  memory issues.
- "compute_engine" - Performant C++ implementation of the Deep Lake
  Compute Engine. Runs on the client and can be used for any data
  stored in or connected to Deep Lake. It cannot be used with
  in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
  Responsible for storage and query execution. Only available for
  data stored in the Deep Lake Managed Database. To store datasets
  in this database, specify runtime = {"db_engine": True} during
  dataset creation.
**kwargs – Additional keyword arguments
Returns
List of Documents selected by maximal marginal relevance.
Raises
ValueError – when MMR search is on but embedding function is
not specified.
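A concrete sketch with hypothetical values, using the default "python" execution option:
>>> docs = vector_store.max_marginal_relevance_search(
...     query="how do transformers work?",  # hypothetical query
...     k=4,
...     fetch_k=20,
...     lambda_mult=0.5,  # balance relevance and diversity equally
...     exec_option="python",  # client-side search; fine for small datasets
... )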
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, exec_option: Optional[str] = None, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance. Maximal marginal
relevance optimizes for similarity to query AND diversity among selected docs.
Examples:
>>> data = vector_store.max_marginal_relevance_search_by_vector(
...     embedding=<your_embedding>,
...     fetch_k=<elements_to_fetch_before_mmr_search>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch for MMR algorithm.
lambda_mult – Number between 0 and 1 determining the degree of diversity.
0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.
exec_option (str) – DeepLakeVectorStore supports 3 ways for searching.
Could be "python", "compute_engine" or "tensor_db". Defaults to
"python".
- "python" - Pure-python implementation running on the client.
  Can be used for data stored anywhere. WARNING: using this
  option with big datasets is discouraged due to potential
  memory issues.
- "compute_engine" - Performant C++ implementation of the Deep Lake
  Compute Engine. Runs on the client and can be used for any data
  stored in or connected to Deep Lake. It cannot be used with
  in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
  Responsible for storage and query execution. Only available for
  data stored in the Deep Lake Managed Database. To store datasets
  in this database, specify runtime = {"db_engine": True} during
  dataset creation.
**kwargs – Additional keyword arguments.
Returns
List[Document] – A list of documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search(
... query=<your_query>,
... k=<num_items>,
... exec_option=<preferred_exec_option>,
... )
>>> # Run tql search:
>>> data = vector_store.similarity_search(
... query=None,
... tql="SELECT * WHERE id == <id>",
... exec_option="compute_engine",
... )
Parameters
k (int) – Number of Documents to return. Defaults to 4.
query (str) – Text to look up similar documents.
**kwargs – Additional keyword arguments include:
embedding (Callable): Embedding function to use. Defaults to None.
distance_metric (str): ‘L2’ for Euclidean, ‘L1’ for Nuclear, ‘max’
for L-infinity, ‘cos’ for cosine, ‘dot’ for dot product.
Defaults to ‘L2’.
filter (Union[Dict, Callable], optional): Additional filter before embedding search.
- Dict: Key-value search on tensors of htype json
  (sample must satisfy all key-value filters).
  Dict = {"tensor_1": {"key": value}, "tensor_2": {"key": value}}
- Function: Compatible with deeplake.filter.
Defaults to None.
exec_option (str): Supports 3 ways to perform searching:
'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
- 'python': Pure-python implementation for the client.
  WARNING: not recommended for big datasets.
- 'compute_engine': C++ implementation of the Compute Engine for
  the client. Not for in-memory or local datasets.
- 'tensor_db': Managed Tensor Database for storage and query.
  Only for data in the Deep Lake Managed Database.
  Use runtime = {"db_engine": True} during dataset creation.
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
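A sketch combining a query with the Dict filter convention described above; the tensor name, key, and value are hypothetical:
>>> docs = vector_store.similarity_search(
...     query="quarterly revenue",
...     k=4,
...     filter={"metadata": {"source": "report.pdf"}},
...     distance_metric="cos",  # cosine similarity instead of the default L2
... )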
similarity_search_by_vector(embedding: Union[List[float], ndarray], k: int = 4, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
... embedding=<your_embedding>,
... k=<num_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Parameters
embedding (Union[List[float], np.ndarray]) – Embedding to find similar docs.
k (int) – Number of Documents to return. Defaults to 4.
**kwargs – Additional keyword arguments including:
filter (Union[Dict, Callable], optional):
Additional filter before embedding search.
- Dict - Key-value search on tensors of htype json. True
  if all key-value filters are satisfied.
  Dict = {"tensor_name_1": {"key": value},
  "tensor_name_2": {"key": value}}
- Function - Any function compatible with deeplake.filter.
Defaults to None.
exec_option (str): Options for search execution include
"python", "compute_engine", or "tensor_db". Defaults to "python".
- "python" - Pure-python implementation running on the client.
  Can be used for data stored anywhere. WARNING: using this
  option with big datasets is discouraged due to potential
  memory issues.
- "compute_engine" - Performant C++ implementation of the Deep Lake
  Compute Engine. Runs on the client and can be used for any data
  stored in or connected to Deep Lake. It cannot be used with
  in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
  Responsible for storage and query execution. Only available for
  data stored in the Deep Lake Managed Database. To store datasets
  in this database, specify runtime = {"db_engine": True} during
  dataset creation.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear,
'max' for L-infinity distance, 'cos' for cosine similarity,
'dot' for dot product. Defaults to 'L2'.
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
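A sketch using score_threshold to keep only sufficiently similar results; the query and threshold are hypothetical:
>>> docs_and_scores = vector_store.similarity_search_with_relevance_scores(
...     query="release checklist",
...     k=4,
...     score_threshold=0.8,  # drop docs scoring below 0.8
... )
>>> for doc, score in docs_and_scores:
...     print(score, doc.metadata.get("source"))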
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Run similarity search with Deep Lake with distance returned.
Examples:
>>> data = vector_store.similarity_search_with_score(
...     query=<your_query>,
...     embedding=<your_embedding_function>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
**kwargs – Additional keyword arguments. Some of these arguments are:
distance_metric: 'L2' for Euclidean, 'L1' for Nuclear, 'max' for
L-infinity distance, 'cos' for cosine similarity, 'dot' for dot
product. Defaults to 'L2'.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
embedding_function (Callable): Embedding function to use. Defaults to None.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform
searching. It could be either "python", "compute_engine" or
"tensor_db". Defaults to "python".
- "python" - Pure-python implementation running on the client.
  Can be used for data stored anywhere. WARNING: using this
  option with big datasets is discouraged due to potential
  memory issues.
- "compute_engine" - Performant C++ implementation of the Deep Lake
  Compute Engine. Runs on the client and can be used for any data
  stored in or connected to Deep Lake. It cannot be used with
  in-memory or local datasets.
- "tensor_db" - Performant, fully-hosted Managed Tensor Database.
  Responsible for storage and query execution. Only available for
  data stored in the Deep Lake Managed Database. To store datasets
  in this database, specify runtime = {"db_engine": True} during
  dataset creation.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
property embeddings: Optional[langchain.embeddings.base.Embeddings]¶
Access the query embedding object if available.
Examples using DeepLake¶
Deep Lake
Activeloop’s Deep Lake
Question answering over a group chat messages using Activeloop’s DeepLake
QA using Activeloop’s DeepLake
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
DeepLake self-querying
langchain.vectorstores.qdrant.Qdrant¶
class langchain.vectorstores.qdrant.Qdrant(client: Any, collection_name: str, embeddings: Optional[Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', distance_strategy: str = 'COSINE', vector_name: Optional[str] = None, embedding_function: Optional[Callable] = None)[source]¶
Bases: VectorStore
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
Initialize with necessary components.
Methods
__init__(client, collection_name[, ...])
Initialize with necessary components.
aadd_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
aadd_texts(texts[, metadatas, ids, batch_size])
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents, **kwargs)
Run more documents through the embeddings and add to the vectorstore.
add_texts(texts[, metadatas, ids, batch_size])
Run more texts through the embeddings and add to the vectorstore.
afrom_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
afrom_texts(texts, embedding[, metadatas, ...])
Construct Qdrant wrapper from a list of texts.
amax_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
amax_marginal_relevance_search_with_score_by_vector(...)
Return docs and their scores, selected using the maximal marginal relevance.
as_retriever(**kwargs)
asearch(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
asimilarity_search(query[, k, filter])
Return docs most similar to query.
asimilarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
asimilarity_search_with_relevance_scores(query)
Return docs most similar to query.
asimilarity_search_with_score(query[, k, ...])
Return docs most similar to query.
asimilarity_search_with_score_by_vector(...)
Return docs most similar to embedding vector.
delete([ids])
Delete by vector ID or other criteria.
from_documents(documents, embedding, **kwargs)
Return VectorStore initialized from documents and embeddings.
from_texts(texts, embedding[, metadatas, ...])
Construct Qdrant wrapper from a list of texts.
max_marginal_relevance_search(query[, k, ...])
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_by_vector(...)
Return docs selected using the maximal marginal relevance.
max_marginal_relevance_search_with_score_by_vector(...)
Return docs and their scores, selected using the maximal marginal relevance.
search(query, search_type, **kwargs)
Return docs most similar to query using specified search type.
similarity_search(query[, k, filter, ...])
Return docs most similar to query.
similarity_search_by_vector(embedding[, k, ...])
Return docs most similar to embedding vector.
similarity_search_with_relevance_scores(query)
Return docs and relevance scores in the range [0, 1].
similarity_search_with_score(query[, k, ...])
Return docs most similar to query.
similarity_search_with_score_by_vector(embedding)
Return docs most similar to embedding vector.
Attributes
CONTENT_KEY
METADATA_KEY
VECTOR_NAME
embeddings
Access the query embedding object if available.
async aadd_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size – How many vectors to upload per request.
Default: 64
Returns
List of ids from adding the texts into the vectorstore.
add_documents(documents: List[Document], **kwargs: Any) → List[str]¶
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) → List[str][source]¶
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size – How many vectors to upload per request.
Default: 64
Returns
List of ids from adding the texts into the vectorstore.
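A sketch of adding texts with metadata and uuid-like ids, assuming a qdrant store constructed as in the class example; the texts and metadata are illustrative:
>>> import uuid
>>> new_ids = qdrant.add_texts(
...     texts=["point one", "point two"],
...     metadatas=[{"topic": "demo"}, {"topic": "demo"}],
...     ids=[str(uuid.uuid4()) for _ in range(2)],  # uuid-like strings, as required
...     batch_size=64,
... )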
async classmethod afrom_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, force_recreate: bool = False, **kwargs: Any) → Qdrant[source]¶
Construct Qdrant wrapper from a list of texts.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location – If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true - use gRPC interface whenever possible in custom methods.
Default: False
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
path – Path in which the vectors will be stored while using local mode.
Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
Default: “Cosine”
content_payload_key – A payload key used to store the content of the document.
Default: “page_content”
metadata_payload_key – A payload key used to store the metadata of the document.
Default: “metadata”
vector_name – Name of the vector to be used internally in Qdrant.
Default: None
batch_size – How many vectors to upload per request.
Default: 64
shard_number – Number of shards in collection. Default is 1, minimum is 1.
replication_factor – Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider
it successful. Increasing this number will make the collection more
resilient to inconsistencies, but will also make it fail if not enough
replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload – If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are
indexed - remain in RAM.
hnsw_config – Params for HNSW index
optimizers_config – Params for optimizer
wal_config – Params for Write-Ahead-Log
quantization_config – Params for quantization, if None - quantization will be disabled
init_from – Use data stored in another collection to initialize this collection
force_recreate – Force recreating the collection
**kwargs – Additional arguments passed directly into REST client initialization
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = await Qdrant.afrom_texts(texts, embeddings, "localhost")
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
async amax_marginal_relevance_search_with_score_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance and distance for
each.
as_retriever(**kwargs: Any) → VectorStoreRetriever¶
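as_retriever is not documented further here; as a sketch, the search_type and search_kwargs options below are standard VectorStoreRetriever settings rather than Qdrant-specific ones:
>>> retriever = qdrant.as_retriever(
...     search_type="mmr",  # or "similarity"
...     search_kwargs={"k": 4},
... )
>>> docs = retriever.get_relevant_documents("what is qdrant?")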
async asearch(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Document][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding vector to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
search_params – Additional search params
offset – Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold – Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency – Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
  queried replicas
- 'majority' - query all replicas, but return values present in the
  majority of replicas
- 'quorum' - query the majority of replicas, return values present in
  all of them
- 'all' - query all replicas, and return values present in all replicas
Returns
List of Documents most similar to the query.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[Document, float]]¶
Return docs most similar to query.
async asimilarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
search_params – Additional search params
offset – Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold – Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency – Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
  queried replicas
- 'majority' - query all replicas, but return values present in the
  majority of replicas
- 'quorum' - query the majority of replicas, return values present in
  all of them
- 'all' - query all replicas, and return values present in all replicas
Returns
List of documents most similar to the query text and distance for each.
async asimilarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs most similar to embedding vector.
Parameters
embedding – Embedding vector to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
search_params – Additional search params
offset – Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold – Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency – Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
  queried replicas
- 'majority' - query all replicas, but return values present in the
  majority of replicas
- 'quorum' - query the majority of replicas, return values present in
  all of them
- 'all' - query all replicas, and return values present in all replicas
Returns
List of documents most similar to the query text and distance for each.
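A sketch of paginating scored results with offset and score_threshold, assuming an OpenAIEmbeddings instance named embeddings as in the afrom_texts example; the query and threshold are hypothetical:
import asyncio

async def paged_search(store, vector):
    # first page: top 4 hits at or above the (hypothetical) threshold
    page_1 = await store.asimilarity_search_with_score_by_vector(
        embedding=vector, k=4, offset=0, score_threshold=0.7)
    # second page: the next 4 hits
    page_2 = await store.asimilarity_search_with_score_by_vector(
        embedding=vector, k=4, offset=4, score_threshold=0.7)
    return page_1 + page_2

results = asyncio.run(paged_search(qdrant, embeddings.embed_query("pricing plans")))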
delete(ids: Optional[List[str]] = None, **kwargs: Any) → Optional[bool][source]¶
Delete by vector ID or other criteria.
Parameters
ids – List of ids to delete.
**kwargs – Other keyword arguments that subclasses might use.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
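A sketch of deleting a single point; the id is hypothetical but uuid-like, matching the ids accepted by add_texts:
>>> import uuid
>>> doc_id = str(uuid.uuid4())  # hypothetical id of a previously added point
>>> ok = qdrant.delete(ids=[doc_id])
>>> print(ok)  # True on success, False otherwise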
classmethod from_documents(documents: List[Document], embedding: Embeddings, **kwargs: Any) → VST¶
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', vector_name: Optional[str] = None, batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, force_recreate: bool = False, **kwargs: Any) → Qdrant[source]¶
Construct Qdrant wrapper from a list of texts.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location – If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true - use gRPC interface whenever possible in custom methods.
Default: False
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
path – Path in which the vectors will be stored while using local mode.
Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
Default: “Cosine”
content_payload_key – A payload key used to store the content of the document.
Default: “page_content”
metadata_payload_key – A payload key used to store the metadata of the document.
Default: “metadata”
vector_name – Name of the vector to be used internally in Qdrant.
Default: None
batch_size – How many vectors to upload per request.
Default: 64
shard_number – Number of shards in collection. Default is 1, minimum is 1.
replication_factor – Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider
it successful. Increasing this number will make the collection more
resilient to inconsistencies, but will also make it fail if not enough
replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload – If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are
indexed - remain in RAM.
hnsw_config – Params for HNSW index
optimizers_config – Params for optimizer
wal_config – Params for Write-Ahead-Log
quantization_config – Params for quantization, if None - quantization will be disabled
init_from – Use data stored in another collection to initialize this collection
force_recreate – Force recreating the collection
**kwargs – Additional arguments passed directly into REST client initialization
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
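A sketch with a hypothetical query, leaning toward diversity by lowering lambda_mult:
>>> docs = qdrant.max_marginal_relevance_search(
...     query="vector databases",
...     k=4,
...     fetch_k=20,
...     lambda_mult=0.3,  # closer to 0 favors diversity
... )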
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Document][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_with_score_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[Tuple[Document, float]][source]¶
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance and distance for
each.
search(query: str, search_type: str, **kwargs: Any) → List[Document]¶
Return docs most similar to query using specified search type.