    Parameters:
        file_path (str): path of the local file to store the messages.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from the local file.
    add_message(message: BaseMessage) -> None
        Append the message to the record in the local file.
    clear() -> None
        Clear session memory from the local file.

class langchain.memory.InMemoryEntityStore(*, store={})
    Bases: langchain.memory.entity.BaseEntityStore
    Basic in-memory entity store.
    attribute store: Dict[str, Optional[str]] = {}
    clear() -> None
        Delete all entities from store.
    delete(key: str) -> None
        Delete entity value from store.
    exists(key: str) -> bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) -> Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) -> None
        Set entity value in store.

class langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)
    Bases: langchain.schema.BaseChatMessageHistory
    Chat message history cache that uses Momento as a backend. See https://gomomento.com/
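All the entity stores in this module (in-memory, Redis, SQLite) share the five-method interface shown above for InMemoryEntityStore. A minimal dict-backed stand-in sketching that contract (MiniEntityStore is illustrative, not a LangChain class):

```python
from typing import Dict, Optional

class MiniEntityStore:
    """Dict-backed stand-in for the BaseEntityStore interface."""
    def __init__(self) -> None:
        self.store: Dict[str, Optional[str]] = {}
    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        return self.store.get(key, default)
    def set(self, key: str, value: Optional[str]) -> None:
        self.store[key] = value
    def delete(self, key: str) -> None:
        self.store.pop(key, None)  # remove the entity if present
    def exists(self, key: str) -> bool:
        return key in self.store
    def clear(self) -> None:
        self.store.clear()

store = MiniEntityStore()
store.set("Alice", "Alice is an engineer.")
print(store.get("Alice"))           # Alice is an engineer.
print(store.get("Bob", "unknown"))  # unknown
```

The Redis- and SQLite-backed stores below expose exactly these methods; only the storage behind `get`/`set` changes.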
https://api.python.langchain.com/en/latest/modules/memory.html
    Parameters:
        session_id (str)
        cache_client (momento.CacheClient)
        cache_name (str)
        key_prefix (str)
        ttl (Optional[timedelta])
        ensure_cache_exists (bool)
    classmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs) -> MomentoChatMessageHistory
        Construct cache from CacheClient parameters.
        Parameters:
            session_id (str)
            cache_name (str)
            ttl (timedelta)
            configuration (Optional[momento.config.Configuration])
            auth_token (Optional[str])
            kwargs (Any)
    property messages: list[langchain.schema.BaseMessage]
        Retrieve the messages from Momento.
        Raises:
            SdkException: Momento service or network error.
            Exception: unexpected response.
        Returns: list of cached messages.
    add_message(message: BaseMessage) -> None
        Store a message in the cache.
        Raises:
            SdkException: Momento service or network error.
            Exception: unexpected response.
    clear() -> None
        Remove the session's messages from the cache.
        Raises:
            SdkException: Momento service or network error.
            Exception: unexpected response.

class langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')
    Bases: langchain.schema.BaseChatMessageHistory
    Chat message history that stores history in MongoDB.
    Parameters:
        connection_string (str): connection string to connect to MongoDB
        session_id (str): arbitrary key that is used to store the messages of a single chat session.
        database_name (str): name of the database to use
        collection_name (str): name of the collection to use
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from MongoDB.
    add_message(message: BaseMessage) -> None
        Append the message to the record in MongoDB.
    clear() -> None
        Clear session memory from MongoDB.

class langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')
    Bases: langchain.memory.chat_memory.BaseChatMemory
    Parameters:
        chat_memory (langchain.schema.BaseChatMessageHistory)
        output_key (Optional[str])
        input_key (Optional[str])
        return_messages (bool)
        url (str)
        session_id (str)
        context (Optional[str])
        api_key (Optional[str])
        client_id (Optional[str])
        timeout (int)
        memory_key (str)
    attribute api_key: Optional[str] = None
    attribute client_id: Optional[str] = None
    attribute context: Optional[str] = None
    attribute session_id: str [Required]
    attribute url: str = 'https://api.getmetal.io/v1/motorhead'
    delete_session() -> None
        Delete a session.
    async init() -> None
    load_memory_variables(values: Dict[str, Any]) -> Dict[str, Any]
        Return key-value pairs given the text input to the chain. If None, return all memories.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) -> None
        Save context from this conversation to buffer.
    property memory_variables: List[str]
        Input keys this memory class will load dynamically.

class langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')
    Bases: langchain.schema.BaseChatMessageHistory
    Chat message history stored in a Postgres database.
    Parameters:
        session_id (str)
        connection_string (str)
        table_name (str)
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from PostgreSQL.
    add_message(message: BaseMessage) -> None
        Append the message to the record in PostgreSQL.
    clear() -> None
        Clear session memory from PostgreSQL.

class langchain.memory.ReadOnlySharedMemory(*, memory)
    Bases: langchain.schema.BaseMemory
    A memory wrapper that is read-only and cannot be changed.
    attribute memory: langchain.schema.BaseMemory [Required]
    clear() -> None
        Nothing to clear, got a memory like a vault.
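The chat-history backends in this module (file, Momento, MongoDB, Postgres, Redis, SQL, Zep) all reduce to the same three-member contract: a `messages` property, `add_message`, and `clear`. A list-backed stand-in sketching that contract (`Message` is a hypothetical substitute for `langchain.schema.BaseMessage`):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    """Hypothetical stand-in for langchain.schema.BaseMessage."""
    role: str
    content: str

class ListChatMessageHistory:
    """List-backed history with the documented three-member contract."""
    def __init__(self, session_id: str) -> None:
        self.session_id = session_id   # key identifying one chat session
        self._messages: List[Message] = []
    @property
    def messages(self) -> List[Message]:
        # retrieve the messages for this session
        return list(self._messages)
    def add_message(self, message: Message) -> None:
        # append the message to the session record
        self._messages.append(message)
    def clear(self) -> None:
        # wipe the session's memory
        self._messages.clear()

history = ListChatMessageHistory("session-1")
history.add_message(Message("human", "Hi there"))
history.add_message(Message("ai", "Hello!"))
print([m.content for m in history.messages])  # ['Hi there', 'Hello!']
```

The concrete classes differ only in where the record lives (a file, a Mongo collection, a Postgres table, a Redis key, ...).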
    load_memory_variables(inputs: Dict[str, Any]) -> Dict[str, str]
        Load memory variables from memory.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) -> None
        Nothing should be saved or changed.
    property memory_variables: List[str]
        Return memory variables.

class langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)
    Bases: langchain.schema.BaseChatMessageHistory
    Chat message history stored in a Redis database.
    Parameters:
        session_id (str)
        url (str)
        key_prefix (str)
        ttl (Optional[int])
    property key: str
        Construct the record key to use.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from Redis.
    add_message(message: BaseMessage) -> None
        Append the message to the record in Redis.
    clear() -> None
        Clear session memory from Redis.

class langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)
    Bases: langchain.memory.entity.BaseEntityStore
    Redis-backed entity store. Entities get a TTL of 1 day by default, and that TTL is extended by 3 days every time the entity is read back.
    Parameters:
        session_id (str)
        url (str)
        key_prefix (str)
        ttl (Optional[int])
        recall_ttl (Optional[int])
        args (Any)
        redis_client (Any)
    attribute key_prefix: str = 'memory_store'
    attribute recall_ttl: Optional[int] = 259200
    attribute redis_client: Any = None
    attribute session_id: str = 'default'
    attribute ttl: Optional[int] = 86400
    clear() -> None
        Delete all entities from store.
    delete(key: str) -> None
        Delete entity value from store.
    exists(key: str) -> bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) -> Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) -> None
        Set entity value in store.
    property full_key_prefix: str

class langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')
    Bases: langchain.schema.BaseChatMessageHistory
    Chat message history stored in an SQL database.
    Parameters:
        session_id (str)
        connection_string (str)
        table_name (str)
    property messages: List[langchain.schema.BaseMessage]
        Retrieve all messages from db.
    add_message(message: BaseMessage) -> None
        Append the message to the record in db.
    clear() -> None
        Clear session memory from db.

class langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)
    Bases: langchain.memory.entity.BaseEntityStore
    SQLite-backed entity store.
    Parameters:
        session_id (str)
        db_file (str)
        table_name (str)
        args (Any)
    attribute session_id: str = 'default'
    attribute table_name: str = 'memory_store'
    clear() -> None
        Delete all entities from store.
    delete(key: str) -> None
        Delete entity value from store.
    exists(key: str) -> bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) -> Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) -> None
        Set entity value in store.
    property full_table_name: str

class langchain.memory.SimpleMemory(*, memories={})
    Bases: langchain.schema.BaseMemory
    Simple memory for storing context or other bits of information that shouldn't ever change between prompts.
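SQLiteEntityStore maps the same entity-store interface onto a single key/value table. A self-contained sketch using the standard library's `sqlite3` (the one-table schema here is an illustrative assumption, not the class's actual layout):

```python
import sqlite3

class SQLiteStore:
    """sqlite3-backed key/value store mirroring the entity-store interface."""
    def __init__(self, db_file: str = ":memory:", table_name: str = "memory_store") -> None:
        self.conn = sqlite3.connect(db_file)
        self.table = table_name
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} (key TEXT PRIMARY KEY, value TEXT)"
        )
    def set(self, key, value):
        # upsert: INSERT OR REPLACE keyed on the primary key
        self.conn.execute(
            f"INSERT OR REPLACE INTO {self.table} (key, value) VALUES (?, ?)",
            (key, value),
        )
    def get(self, key, default=None):
        row = self.conn.execute(
            f"SELECT value FROM {self.table} WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else default
    def exists(self, key):
        return self.get(key) is not None
    def delete(self, key):
        self.conn.execute(f"DELETE FROM {self.table} WHERE key = ?", (key,))
    def clear(self):
        self.conn.execute(f"DELETE FROM {self.table}")
```

Values are passed as `?` placeholders; only the table name (not user data) is interpolated into the SQL string.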
    attribute memories: Dict[str, Any] = {}
    clear() -> None
        Nothing to clear, got a memory like a vault.
    load_memory_variables(inputs: Dict[str, Any]) -> Dict[str, str]
        Return key-value pairs given the text input to the chain. If None, return all memories.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) -> None
        Nothing should be saved or changed, my memory is set in stone.
    property memory_variables: List[str]
        Input keys this memory class will load dynamically.

class langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)
    Bases: langchain.schema.BaseMemory
    Class for a VectorStore-backed memory object.
    attribute input_key: Optional[str] = None
        Key name to index the inputs to load_memory_variables.
    attribute memory_key: str = 'history'
        Key name to locate the memories in the result of load_memory_variables.
    attribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]
        VectorStoreRetriever object to connect to.
    attribute return_docs: bool = False
        Whether or not to return the result of querying the database directly.
    clear() -> None
        Nothing to clear.
    load_memory_variables(inputs: Dict[str, Any]) -> Dict[str, Union[List[langchain.schema.Document], str]]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) -> None
        Save context from this conversation to buffer.
    property memory_variables: List[str]
        The list of keys emitted from the load_memory_variables method.

class langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')
    Bases: langchain.schema.BaseChatMessageHistory
    A ChatMessageHistory implementation that uses Zep as a backend.
    Recommended usage:
        # Set up Zep Chat History
        zep_chat_history = ZepChatMessageHistory(
            session_id=session_id,
            url=ZEP_API_URL,
        )
        # Use a standard ConversationBufferMemory to encapsulate the Zep chat history
        memory = ConversationBufferMemory(
            memory_key="chat_history",
            chat_memory=zep_chat_history,
        )
    Zep provides long-term conversation storage for LLM apps. The server stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. For server installation instructions and more, see: https://getzep.github.io/
    This class is a thin wrapper around the zep-python package. Additional Zep functionality is exposed via the zep_summary and zep_messages properties. For more information on the zep-python package, see:
    https://github.com/getzep/zep-python
    Parameters:
        session_id (str)
        url (str)
    property messages: List[langchain.schema.BaseMessage]
        Retrieve messages from Zep memory.
    property zep_messages: List[Message]
        Retrieve messages from Zep memory.
    property zep_summary: Optional[str]
        Retrieve summary from Zep memory.
    add_message(message: BaseMessage) -> None
        Append the message to the Zep memory history.
    search(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) -> List[MemorySearchResult]
        Search Zep memory for messages matching the query.
    clear() -> None
        Clear session memory from Zep. Note that Zep is long-term storage for memory and this is not advised unless you have specific data retention requirements.
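ReadOnlySharedMemory and SimpleMemory above share one pattern: reads delegate or return fixed data, while `save_context` and `clear` are deliberate no-ops. A minimal stand-in of that pattern (neither class here is the LangChain implementation; `DictMemory` is a hypothetical inner memory for illustration):

```python
class DictMemory:
    """Hypothetical inner memory holding a mutable history string."""
    memory_variables = ["history"]
    def __init__(self):
        self.data = {"history": ""}
    def load_memory_variables(self, inputs):
        return dict(self.data)
    def save_context(self, inputs, outputs):
        self.data["history"] += str(outputs)

class ReadOnlyWrapper:
    """Read-only view: delegates reads, swallows writes, mirroring the
    ReadOnlySharedMemory pattern documented above."""
    def __init__(self, memory):
        self.memory = memory
    @property
    def memory_variables(self):
        return self.memory.memory_variables
    def load_memory_variables(self, inputs):
        return self.memory.load_memory_variables(inputs)
    def save_context(self, inputs, outputs):
        pass  # nothing should be saved or changed
    def clear(self):
        pass  # nothing to clear

inner = DictMemory()
ro = ReadOnlyWrapper(inner)
ro.save_context({"input": "hi"}, {"output": "hello"})
print(inner.data["history"])  # still empty: the wrapper dropped the write
```

This is useful when one chain should observe another chain's memory without being able to mutate it.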
Vector Stores Wrappers on top of vector stores. class langchain.vectorstores.AlibabaCloudOpenSearch(embedding, config, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Alibaba Cloud OpenSearch Vector Store Parameters embedding (langchain.embeddings.base.Embeddings) – config (langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings) – kwargs (Any) – Return type None add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, search_filter=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – search_filter (Optional[Dict[str, Any]]) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_with_relevance_scores(query, k=4, search_filter=None, **kwargs)[source] Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query (str) – input text k (int) – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 to 1 to
https://api.python.langchain.com/en/latest/modules/vectorstores.html
                score_threshold: optional, a floating point value between 0 and 1 used to filter the resulting set of retrieved docs.
        Returns: list of tuples of (doc, similarity_score).
    similarity_search_by_vector(embedding, k=4, search_filter=None, **kwargs) -> List[langchain.schema.Document]
        Return docs most similar to embedding vector.
        Parameters:
            embedding (List[float]): embedding to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            search_filter (Optional[dict])
        Returns: list of Documents most similar to the query vector.
    inner_embedding_query(embedding: List[float], search_filter: Optional[Dict[str, Any]] = None, k: int = 4) -> Dict[str, Any]
    create_results(json_result: Dict[str, Any]) -> List[langchain.schema.Document]
    create_results_with_score(json_result: Dict[str, Any]) -> List[Tuple[langchain.schema.Document, float]]
    classmethod from_texts(texts, embedding, metadatas=None, config=None, **kwargs) -> AlibabaCloudOpenSearch
        Return VectorStore initialized from texts and embeddings.
        Parameters:
            texts (List[str])
            embedding (langchain.embeddings.base.Embeddings)
            metadatas (Optional[List[dict]])
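similarity_search_with_relevance_scores normalizes scores into [0, 1] and can drop results below score_threshold. A self-contained sketch of that contract over raw vectors (the cosine scoring and the (cos + 1) / 2 normalization are illustrative assumptions, not the store's actual distance function):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_with_relevance_scores(query_vec, docs, k=4, score_threshold=None):
    """docs: list of (text, vector) pairs. Cosine similarity is mapped
    from [-1, 1] into [0, 1], so 1 is most similar and 0 is dissimilar;
    score_threshold then drops low-relevance hits before the top-k cut."""
    scored = [(text, (cosine(query_vec, vec) + 1) / 2) for text, vec in docs]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    if score_threshold is not None:
        scored = [pair for pair in scored if pair[1] >= score_threshold]
    return scored[:k]

docs = [("north", [0.0, 1.0]), ("east", [1.0, 0.0]), ("south", [0.0, -1.0])]
print(search_with_relevance_scores([0.0, 1.0], docs, score_threshold=0.6))
```

With the threshold at 0.6, only "north" (score 1.0) survives; "east" scores 0.5 and "south" 0.0 under this normalization.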
            config (Optional[AlibabaCloudOpenSearchSettings])
            kwargs (Any)
    classmethod from_documents(documents, embedding, ids=None, config=None, **kwargs) -> AlibabaCloudOpenSearch
        Return VectorStore initialized from documents and embeddings.
        Parameters:
            documents (List[langchain.schema.Document])
            embedding (langchain.embeddings.base.Embeddings)
            ids (Optional[List[str]])
            config (Optional[AlibabaCloudOpenSearchSettings])
            kwargs (Any)

class langchain.vectorstores.AlibabaCloudOpenSearchSettings(endpoint, instance_id, username, password, datasource_name, embedding_index_name, field_name_mapping)
    Bases: object
    OpenSearch client configuration.
    Attributes:
        endpoint (str): the endpoint of the OpenSearch instance; you can find it in the console of Alibaba Cloud OpenSearch.
        instance_id (str): the identifier of the OpenSearch instance; you can find it in the console of Alibaba Cloud OpenSearch.
        datasource_name (str): the name of the data source specified when creating it.
        username (str): the username specified when purchasing the instance.
        password (str): the password specified when purchasing the instance.
        embedding_index_name (str): the name of the vector attribute specified when configuring the instance attributes.
        field_name_mapping (Dict): field name mapping between the OpenSearch vector store and the OpenSearch instance configuration table field names:
            {
                'id': 'The id field name map of index document.',
                'document': 'The text field name map of index document.',
                'embedding': 'In the embedding field of the opensearch instance, the values must be in float16 multivalue type and separated by commas.',
                'metadata_field_x': 'Metadata field mapping includes the mapped field name and operator in the mapping value, separated by a comma between the mapped field name and the operator.',
            }
    Parameters:
        endpoint (str)
        instance_id (str)
        username (str)
        password (str)
        datasource_name (str)
        embedding_index_name (str)
        field_name_mapping (Dict[str, str])
    endpoint: str
    instance_id: str
    username: str
    password: str
    datasource_name: str
    embedding_index_name: str
    field_name_mapping: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata_field_x': 'metadata_field_x,operator'}

class langchain.vectorstores.AnalyticDB(connection_string, embedding_function, embedding_dimension=1536, collection_name='langchain_document', pre_delete_collection=False, logger=None)
    Bases: langchain.vectorstores.base.VectorStore
    VectorStore implementation using AnalyticDB. AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax.
    - connection_string: a Postgres connection string.
    - embedding_function: any embedding function implementing the langchain.embeddings.base.Embeddings interface.
    - collection_name: the name of the collection to use. (default: langchain)
      NOTE: this is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist), so make sure the user has the right permissions to create tables.
    - pre_delete_collection: if True, will delete the collection if it exists. (default: False) Useful for testing.
    Parameters:
        connection_string (str)
        embedding_function (Embeddings)
        embedding_dimension (int)
        collection_name (str)
        pre_delete_collection (bool)
        logger (Optional[logging.Logger])
    create_table_if_not_exists() -> None
    create_collection() -> None
    delete_collection() -> None
    add_texts(texts, metadatas=None, ids=None, batch_size=500, **kwargs) -> List[str]
        Run more texts through the embeddings and add to the vectorstore.
        Parameters:
            texts (Iterable[str]): iterable of strings to add to the vectorstore.
            metadatas (Optional[List[dict]]): optional list of metadatas associated with the texts.
            ids (Optional[List[str]])
            batch_size (int)
            kwargs (Any): vectorstore-specific parameters.
        Returns: list of ids from adding the texts into the vectorstore.
    similarity_search(query, k=4, filter=None, **kwargs) -> List[langchain.schema.Document]
        Run similarity search with AnalyticDB with distance.
        Parameters:
            query (str): query text to search for.
            k (int): number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): filter by metadata. Defaults to None.
        Returns: list of Documents most similar to the query.
    similarity_search_with_score(query, k=4, filter=None) -> List[Tuple[Document, float]]
        Return docs most similar to query.
        Parameters:
            query (str): text to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): filter by metadata. Defaults to None.
        Returns: list of Documents most similar to the query, with a score for each.
    similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) -> List[Tuple[Document, float]]
    similarity_search_by_vector(embedding, k=4, filter=None, **kwargs) -> List[langchain.schema.Document]
        Return docs most similar to embedding vector.
        Parameters:
            embedding (List[float]): embedding to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): filter by metadata. Defaults to None.
        Returns: list of Documents most similar to the query vector.
    classmethod from_texts(texts, embedding, metadatas=None, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs) -> AnalyticDB
        Return VectorStore initialized from texts and embeddings. A Postgres connection string is required: either pass it as a parameter or set the PG_CONNECTION_STRING environment variable.
        Parameters:
            texts (List[str])
            embedding (langchain.embeddings.base.Embeddings)
            metadatas (Optional[List[dict]])
            embedding_dimension (int)
            collection_name (str)
            ids (Optional[List[str]])
            pre_delete_collection (bool)
            kwargs (Any)
    classmethod get_connection_string(kwargs: Dict[str, Any]) -> str
    classmethod from_documents(documents, embedding, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs) -> AnalyticDB
        Return VectorStore initialized from documents and embeddings. A Postgres connection string is required: either pass it as a parameter or set the PG_CONNECTION_STRING environment variable.
        Parameters:
            documents (List[langchain.schema.Document])
            embedding (langchain.embeddings.base.Embeddings)
            embedding_dimension (int)
            collection_name (str)
            ids (Optional[List[str]])
            pre_delete_collection (bool)
            kwargs (Any)
    classmethod connection_string_from_db_params(driver, host, port, database, user, password) -> str
        Return connection string from database parameters.
        Parameters:
            driver (str)
            host (str)
            port (int)
            database (str)
            user (str)
            password (str)
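connection_string_from_db_params assembles a driver-qualified Postgres URL from its parts. A sketch of the likely shape (the exact `postgresql+driver://` scheme is an assumption based on SQLAlchemy URL conventions, not confirmed by this page):

```python
def connection_string_from_db_params(driver, host, port, database, user, password):
    # e.g. driver="psycopg2" -> "postgresql+psycopg2://user:pw@host:5432/db"
    return f"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}"

uri = connection_string_from_db_params(
    driver="psycopg2", host="localhost", port=5432,
    database="chat", user="postgres", password="mypassword",
)
print(uri)  # postgresql+psycopg2://postgres:mypassword@localhost:5432/chat
```

Note that a real implementation would need to URL-escape special characters in the user and password fields.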
class langchain.vectorstores.Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)
    Bases: langchain.vectorstores.base.VectorStore
    Wrapper around the Annoy vector database.
    To use, you should have the annoy Python package installed.
    Example:
        from langchain import Annoy
        db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
    Parameters:
        embedding_function (Callable)
        index (Any)
        metric (str)
        docstore (Docstore)
        index_to_docstore_id (Dict[int, str])
    add_texts(texts, metadatas=None, **kwargs) -> List[str]
        Run more texts through the embeddings and add to the vectorstore.
        Parameters:
            texts (Iterable[str]): iterable of strings to add to the vectorstore.
            metadatas (Optional[List[dict]]): optional list of metadatas associated with the texts.
            kwargs (Any): vectorstore-specific parameters.
        Returns: list of ids from adding the texts into the vectorstore.
    process_index_results(idxs, dists) -> List[Tuple[Document, float]]
        Turn Annoy results into a list of documents and scores.
        Parameters:
            idxs (List[int]): list of indices of the documents in the index.
            dists (List[float]): list of distances of the documents in the index.
        Returns: list of Documents and scores.
    similarity_search_with_score_by_vector(embedding, k=4, search_k=-1) -> List[Tuple[Document, float]]
        Return docs most similar to the embedding.
        Parameters:
            embedding (List[float]): embedding to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the query, with a score for each.
    similarity_search_with_score_by_index(docstore_index, k=4, search_k=-1) -> List[Tuple[Document, float]]
        Return docs most similar to the document at docstore_index.
        Parameters:
            docstore_index (int)
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the query, with a score for each.
    similarity_search_with_score(query, k=4, search_k=-1) -> List[Tuple[Document, float]]
        Return docs most similar to query.
        Parameters:
            query (str): text to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the query, with a score for each.
    similarity_search_by_vector(embedding, k=4, search_k=-1, **kwargs) -> List[langchain.schema.Document]
        Return docs most similar to embedding vector.
        Parameters:
            embedding (List[float]): embedding to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the embedding.
    similarity_search_by_index(docstore_index, k=4, search_k=-1, **kwargs) -> List[langchain.schema.Document]
        Return docs most similar to docstore_index.
        Parameters:
            docstore_index (int): index of document in docstore.
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the embedding.
    similarity_search(query, k=4, search_k=-1, **kwargs) -> List[langchain.schema.Document]
        Return docs most similar to query.
        Parameters:
            query (str): text to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            search_k (int): inspect up to search_k nodes, which defaults to n_trees * n if not provided.
        Returns: list of Documents most similar to the query.
    max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs) -> List[langchain.schema.Document]
        Return docs selected using the maximal marginal relevance.
        Maximal marginal relevance optimizes for similarity to the query AND diversity among selected documents.
        Parameters:
            embedding (List[float]): embedding to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            fetch_k (int): number of Documents to fetch to pass to the MMR algorithm.
            lambda_mult (float): number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
        Returns: list of Documents selected by maximal marginal relevance.
    max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs) -> List[langchain.schema.Document]
        Return docs selected using the maximal marginal relevance.
        Maximal marginal relevance optimizes for similarity to the query AND diversity among selected documents.
        Parameters:
            query (str): text to look up documents similar to.
            k (int): number of Documents to return. Defaults to 4.
            fetch_k (int): number of Documents to fetch to pass to the MMR algorithm.
            lambda_mult (float): number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
        Returns: list of Documents selected by maximal marginal relevance.
    classmethod from_texts(texts, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs) -> Annoy
        Construct Annoy wrapper from raw documents.
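The maximal-marginal-relevance selection described above can be sketched as a greedy loop: at each step, pick the candidate with the best trade-off between relevance to the query and redundancy with documents already selected. This is an illustrative implementation over raw (doc, vector) pairs with cosine similarity, not the library's actual code:

```python
import math

def _cos(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def mmr_select(query_vec, candidates, k=4, lambda_mult=0.5):
    """Greedy MMR: maximize
    lambda * sim(query, d) - (1 - lambda) * max sim(d, already_selected).
    candidates: list of (doc, vector) pairs."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(item):
            relevance = _cos(query_vec, item[1])
            # penalty: similarity to the closest already-selected document
            redundancy = max((_cos(item[1], s[1]) for s in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [doc for doc, _ in selected]
```

With lambda_mult near 1 the result approaches a plain top-k relevance ranking; near 0 it heavily favors documents dissimilar to those already chosen, matching the parameter description above.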
embedding (langchain.embeddings.base.Embeddings) – Embedding function to use. metadatas (Optional[List[dict]]) – List of metadata dictionaries to associate with documents. metric (str) – Metric to use for indexing. Defaults to "angular". trees (int) – Number of trees to use for indexing. Defaults to 100. n_jobs (int) – Number of jobs to use for indexing. Defaults to -1. kwargs (Any) – Return type langchain.vectorstores.annoy.Annoy This is a user-friendly interface that: Embeds documents. Creates an in-memory docstore. Initializes the Annoy database. This is intended to be a quick way to get started. Example from langchain import Annoy from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() index = Annoy.from_texts(texts, embeddings) classmethod from_embeddings(text_embeddings, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs)[source] Construct Annoy wrapper from embeddings. Parameters text_embeddings (List[Tuple[str, List[float]]]) – List of tuples of (text, embedding) embedding (langchain.embeddings.base.Embeddings) – Embedding function to use. metadatas (Optional[List[dict]]) – List of metadata dictionaries to associate with documents. metric (str) – Metric to use for indexing. Defaults to "angular". trees (int) – Number of trees to use for indexing. Defaults to 100. n_jobs (int) – Number of jobs to use for indexing. Defaults to -1. kwargs (Any) – Return type langchain.vectorstores.annoy.Annoy
This is a user-friendly interface that: Creates an in-memory docstore with provided embeddings. Initializes the Annoy database. This is intended to be a quick way to get started. Example from langchain import Annoy from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text_embeddings = embeddings.embed_documents(texts) text_embedding_pairs = list(zip(texts, text_embeddings)) db = Annoy.from_embeddings(text_embedding_pairs, embeddings) save_local(folder_path, prefault=False)[source] Save Annoy index, docstore, and index_to_docstore_id to disk. Parameters folder_path (str) – folder path to save index, docstore, and index_to_docstore_id to. prefault (bool) – Whether to pre-load the index into memory. Return type None classmethod load_local(folder_path, embeddings)[source] Load Annoy index, docstore, and index_to_docstore_id from disk. Parameters folder_path (str) – folder path to load index, docstore, and index_to_docstore_id from. embeddings (langchain.embeddings.base.Embeddings) – Embeddings to use when generating queries. Return type langchain.vectorstores.annoy.Annoy class langchain.vectorstores.AtlasDB(name, embedding_function=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Atlas: Nomic's neural database and rhizomatic instrument. To use, you should have the nomic python package installed. Example from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = AtlasDB("my_project", embeddings.embed_query) Parameters name (str) – embedding_function (Optional[Embeddings]) – api_key (Optional[str]) – description (str) – is_public (bool) – reset_project_if_exists (bool) – Return type None add_texts(texts, metadatas=None, ids=None, refresh=True, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]]) – An optional list of ids. refresh (bool) – Whether or not to refresh indices with the updated data. Default True. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] create_index(**kwargs)[source] Creates an index in your project. See https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index for full detail. Parameters kwargs (Any) – Return type Any similarity_search(query, k=4, **kwargs)[source] Run similarity search with AtlasDB Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. kwargs (Any) – Returns List of documents most similar to the query text. Return type List[Document]
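Conceptually, similarity_search embeds the query text and returns the k stored documents whose vectors are nearest to it. A minimal pure-Python sketch of that nearest-neighbor step (the toy vectors below are hypothetical; a real store uses its configured embedding function and an index):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def similarity_search(query_vec, doc_vecs, k=4):
    # Rank stored document vectors by similarity to the query vector
    # and return the indices of the top-k matches, most similar first.
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(similarity_search([1.0, 0.1], docs, k=2))  # indices of the two nearest docs
```

A production store replaces the linear scan with an approximate index (Annoy trees, HNSW, etc.), but the contract is the same.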
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, name=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source] Create an AtlasDB vectorstore from raw documents. Parameters texts (List[str]) – The list of texts to ingest. name (str) – Name of the project to create. api_key (str) – Your nomic API key. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None. ids (Optional[List[str]]) – Optional list of document IDs. If None, ids will be auto created. description (str) – A description for your project. is_public (bool) – Whether your project is publicly accessible. True by default. reset_project_if_exists (bool) – Whether to reset this project if it already exists. Default False. Generally useful during development and testing. index_kwargs (Optional[dict]) – Dict of kwargs for index creation. See https://docs.nomic.ai/atlas_api.html kwargs (Any) – Returns Nomic's neural database and finest rhizomatic instrument Return type AtlasDB classmethod from_documents(documents, embedding=None, ids=None, name=None, api_key=None, persist_directory=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source] Create an AtlasDB vectorstore from a list of documents. Parameters name (str) – Name of the collection to create. api_key (str) – Your nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. ids (Optional[List[str]]) – Optional list of document IDs. If None, ids will be auto created. description (str) – A description for your project. is_public (bool) – Whether your project is publicly accessible. True by default. reset_project_if_exists (bool) – Whether to reset this project if it already exists. Default False. Generally useful during development and testing. index_kwargs (Optional[dict]) – Dict of kwargs for index creation. See https://docs.nomic.ai/atlas_api.html persist_directory (Optional[str]) – kwargs (Any) – Returns Nomic's neural database and finest rhizomatic instrument Return type AtlasDB class langchain.vectorstores.AwaDB(table_name='langchain_awadb', embedding=None, log_and_data_dir=None, client=None)[source] Bases: langchain.vectorstores.base.VectorStore Interface implemented by AwaDB vector stores. Parameters table_name (str) – embedding (Optional[Embeddings]) – log_and_data_dir (Optional[str]) – client (Optional[awadb.Client]) – Return type None add_texts(texts, metadatas=None, is_duplicate_texts=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. :param texts: Iterable of strings to add to the vectorstore. :param metadatas: Optional list of metadatas associated with the texts. :param is_duplicate_texts: Optional whether to duplicate texts. :param kwargs: vectorstore specific parameters. Returns
List of ids from adding the texts into the vectorstore. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – is_duplicate_texts (Optional[bool]) – kwargs (Any) – Return type List[str] load_local(table_name, **kwargs)[source] Parameters table_name (str) – kwargs (Any) – Return type bool similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, **kwargs)[source] Return docs and relevance scores, normalized on a scale from 0 to 1. 0 is dissimilar, 1 is most similar. Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs and relevance scores, normalized on a scale from 0 to 1. 0 is dissimilar, 1 is most similar. Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector(embedding=None, k=4, scores=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (Optional[List[float]]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4. scores (Optional[list]) – kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] create_table(table_name, **kwargs)[source] Create a new table. Parameters table_name (str) – kwargs (Any) – Return type bool use(table_name, **kwargs)[source] Use the specified table. If you don't know the available tables, invoke list_tables. Parameters table_name (str) – kwargs (Any) – Return type bool list_tables(**kwargs)[source] List all the tables created by the client. Parameters kwargs (Any) – Return type List[str] get_current_table(**kwargs)[source] Get the current table. Parameters kwargs (Any) – Return type str classmethod from_texts(texts, embedding=None, metadatas=None, table_name='langchain_awadb', log_and_data_dir=None, client=None, **kwargs)[source] Create an AwaDB vectorstore from raw documents. Parameters texts (List[str]) – List of texts to add to the table. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None. table_name (str) – Name of the table to create. log_and_data_dir (Optional[str]) – Directory of logging and persistence. client (Optional[awadb.Client]) – AwaDB client kwargs (Any) – Returns
AwaDB vectorstore. Return type AwaDB classmethod from_documents(documents, embedding=None, table_name='langchain_awadb', log_and_data_dir=None, client=None, **kwargs)[source] Create an AwaDB vectorstore from a list of documents. If a log_and_data_dir is specified, the table will be persisted there. Parameters documents (List[Document]) – List of documents to add to the vectorstore. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. table_name (str) – Name of the table to create. log_and_data_dir (Optional[str]) – Directory to persist the table. client (Optional[awadb.Client]) – AwaDB client kwargs (Any) – Returns AwaDB vectorstore. Return type AwaDB class langchain.vectorstores.AzureSearch(azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type='hybrid', semantic_configuration_name=None, semantic_query_language='en-us', **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Parameters azure_search_endpoint (str) – azure_search_key (str) – index_name (str) – embedding_function (Callable) – search_type (str) – semantic_configuration_name (Optional[str]) – semantic_query_language (str) – kwargs (Any) – add_texts(texts, metadatas=None, **kwargs)[source] Add texts data to an existing index. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type List[str] similarity_search(query, k=4, **kwargs)[source]
Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] vector_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] vector_search_with_score(query, k=4, filters=None)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filters (Optional[str]) – Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] hybrid_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] hybrid_search_with_score(query, k=4, filters=None)[source] Return docs most similar to query with a hybrid query. Parameters query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4. filters (Optional[str]) – Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] semantic_hybrid_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] semantic_hybrid_search_with_score(query, k=4, filters=None)[source] Return docs most similar to query with a hybrid query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filters (Optional[str]) – Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, azure_search_endpoint='', azure_search_key='', index_name='langchain-index', **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – azure_search_endpoint (str) – azure_search_key (str) – index_name (str) – kwargs (Any) – Return type langchain.vectorstores.azuresearch.AzureSearch
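The hybrid_search variants above fuse a full-text ranking with a vector ranking. Azure Cognitive Search performs this fusion server-side; as a rough illustration of the general idea only (not the service's exact algorithm), reciprocal rank fusion combines two ranked lists:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # rankings: list of ranked lists of document ids, best first.
    # Each document contributes 1 / (k + rank) per list it appears in;
    # scores are summed across lists, then documents are re-ranked.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # hypothetical full-text ranking
vector_hits = ["doc3", "doc1", "doc9"]   # hypothetical vector ranking
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
```

Documents that both rankings agree on rise to the top, which is the property a hybrid query is after.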
class langchain.vectorstores.Cassandra(embedding, session, keyspace, table_name, ttl_seconds=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Cassandra embeddings platform. There is no notion of a default table name, since each embedding function implies its own vector dimension, which is part of the schema. Example from langchain.vectorstores import Cassandra from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() session = ... keyspace = 'my_keyspace' vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive') Parameters embedding (Embeddings) – session (Session) – keyspace (str) – table_name (str) – ttl_seconds (int | None) – Return type None delete_collection()[source] Just an alias for clear (to better align with other VectorStore implementations). Return type None clear()[source] Empty the collection. Return type None delete_by_document_id(document_id)[source] Parameters document_id (str) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str]
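Several classes in this module expose max_marginal_relevance_search, which trades query relevance against diversity via lambda_mult. A self-contained sketch of the greedy MMR selection loop, using toy vectors rather than real embeddings (illustrative only, not the library's exact implementation):

```python
import math

def cos(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr_select(query, docs, k=2, lambda_mult=0.5):
    # Greedy maximal-marginal-relevance selection over doc vectors.
    # Each step maximizes:
    #   lambda_mult * sim(query, doc) - (1 - lambda_mult) * max sim(doc, picked)
    # lambda_mult=1 reduces to plain similarity ranking;
    # lambda_mult near 0 maximizes diversity among the picks.
    remaining = list(range(len(docs)))
    selected = []
    while remaining and len(selected) < k:
        def score(i):
            relevance = cos(query, docs[i])
            redundancy = max((cos(docs[i], docs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

picks = mmr_select([1.0, 0.0], [[0.99, 0.14], [0.95, 0.31], [0.0, 1.0]],
                   k=2, lambda_mult=0.1)
print(picks)  # a low lambda_mult favors the diverse third vector
```

In the real methods, fetch_k candidates come back from the vector index first and MMR then picks k of them.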
similarity_search_with_score_id_by_vector(embedding, k=4)[source] Return docs most similar to embedding vector. No support for filter query (on metadata) along with vector search. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. Returns List of (Document, score, id), the most similar to the query vector. Return type List[Tuple[langchain.schema.Document, float, str]] similarity_search_with_score_id(query, k=4, **kwargs)[source] Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float, str]] similarity_search_with_score_by_vector(embedding, k=4)[source] Return docs most similar to embedding vector. No support for filter query (on metadata) along with vector search. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. Returns List of (Document, score), the most similar to the query vector. Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters
embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, **kwargs)[source] Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding to look up documents similar to. :param k: Number of Documents to return. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Returns List of Documents selected by maximal marginal relevance. Parameters embedding (List[float]) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
:param query: Text to look up documents similar to. :param k: Number of Documents to return. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Optional. Returns List of Documents selected by maximal marginal relevance. Parameters query (str) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Create a Cassandra vectorstore from raw texts. No support for specifying text IDs. Returns a Cassandra vectorstore. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.cassandra.CVST classmethod from_documents(documents, embedding, **kwargs)[source] Create a Cassandra vectorstore from a document list. No support for specifying text IDs. Returns a Cassandra vectorstore. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.vectorstores.cassandra.CVST class langchain.vectorstores.Chroma(collection_name='langchain', embedding_function=None, persist_directory=None, client_settings=None, collection_metadata=None, client=None)[source] Bases: langchain.vectorstores.base.VectorStore
Wrapper around ChromaDB embeddings platform. To use, you should have the chromadb python package installed. Example from langchain.vectorstores import Chroma from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Chroma("langchain_store", embeddings) Parameters collection_name (str) – embedding_function (Optional[Embeddings]) – persist_directory (Optional[str]) – client_settings (Optional[chromadb.config.Settings]) – collection_metadata (Optional[Dict]) – client (Optional[chromadb.Client]) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Run similarity search with Chroma. Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of documents most similar to the query text. Return type List[Document] similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source] Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to. :type embedding: List[float] :param k: Number of Documents to return. Defaults to 4. :type k: int :param filter: Filter by metadata. Defaults to None. :type filter: Optional[Dict[str, str]] Returns List of Documents most similar to the query vector. Parameters embedding (List[float]) – k (int) – filter (Optional[Dict[str, str]]) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, filter=None, **kwargs)[source] Run similarity search with Chroma with distance. Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Tuple[Document, float]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] delete_collection()[source] Delete the collection. Return type None get(ids=None, where=None, limit=None, offset=None, where_document=None, include=None)[source] Gets the collection. Parameters ids (Optional[OneOrMany[ID]]) – The ids of the embeddings to get. Optional. where (Optional[Where]) – A Where type dict used to filter results by.
E.g. {"color": "red", "price": 4.20}. Optional. limit (Optional[int]) – The number of documents to return. Optional. offset (Optional[int]) – The offset to start returning results from. Useful for paging results with limit. Optional. where_document (Optional[WhereDocument]) – A WhereDocument type dict used to filter by the documents. E.g. {$contains: {"text": "hello"}}. Optional. include (Optional[List[str]]) – A list of what to include in the results. Can contain "embeddings", "metadatas", "documents". Ids are always included. Defaults to ["metadatas", "documents"]. Optional. Return type Dict[str, Any] persist()[source] Persist the collection. This can be used to explicitly persist the data to disk. It will also be called automatically when the object is destroyed. Return type None update_document(document_id, document)[source] Update a document in the collection. Parameters document_id (str) – ID of the document to update. document (Document) – Document to update. Return type None classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source] Create a Chroma vectorstore from raw documents. If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral in-memory. Parameters texts (List[str]) – List of texts to add to the collection. collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None. ids (Optional[List[str]]) – List of document IDs. Defaults to None. client_settings (Optional[chromadb.config.Settings]) – Chroma client settings client (Optional[chromadb.Client]) – kwargs (Any) – Returns Chroma vectorstore. Return type Chroma classmethod from_documents(documents, embedding=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source] Create a Chroma vectorstore from a list of documents. If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral in-memory. Parameters collection_name (str) – Name of the collection to create. persist_directory (Optional[str]) – Directory to persist the collection. ids (Optional[List[str]]) – List of document IDs. Defaults to None. documents (List[Document]) – List of documents to add to the vectorstore. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. client_settings (Optional[chromadb.config.Settings]) – Chroma client settings client (Optional[chromadb.Client]) – kwargs (Any) – Returns Chroma vectorstore. Return type Chroma delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None class langchain.vectorstores.Clickhouse(embedding, config=None, **kwargs)[source]
Bases: langchain.vectorstores.base.VectorStore Wrapper around the ClickHouse vector database. You need the clickhouse-connect python package and a valid account to connect to ClickHouse. ClickHouse can not only search with simple vector indexes; it also supports complex queries with multiple conditions, constraints, and even sub-queries. For more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse). Parameters embedding (Embeddings) – config (Optional[ClickhouseSettings]) – kwargs (Any) – Return type None escape_str(value)[source] Parameters value (str) – Return type str add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source] Insert more texts through the embeddings and add to the VectorStore. Parameters texts (Iterable[str]) – Iterable of strings to add to the VectorStore. ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Batch size of insertion metadata – Optional column data to be inserted metadatas (Optional[List[dict]]) – kwargs (Any) – Returns List of ids from adding the texts into the VectorStore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source] Create ClickHouse wrapper with existing texts Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (ClickHouseSettings, Optional) – ClickHouse configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None.
batch_size (int, optional) – Batch size when transmitting data to ClickHouse. Defaults to 32. metadata (List[dict], optional) – Metadata attached to the texts. Defaults to None. Other keyword arguments will be passed into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api) embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[Dict[Any, Any]]]) – kwargs (Any) – Returns ClickHouse Index Return type langchain.vectorstores.clickhouse.Clickhouse similarity_search(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with ClickHouse Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document] similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source] Perform a similarity search with ClickHouse by vectors Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. embedding (List[float]) – kwargs (Any) – Returns List of Documents most similar to the query vector Return type List[Document] similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with ClickHouse Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of documents Return type List[Document] drop()[source] Helper function: Drop data Return type None property metadata_column: str pydantic settings langchain.vectorstores.ClickhouseSettings[source] Bases: pydantic.env_settings.BaseSettings ClickHouse Client Configuration Attribute: clickhouse_host (str) : URL to connect to the ClickHouse backend. Defaults to 'localhost'. clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123. username (str) : Username to log in. Defaults to None. password (str) : Password to log in. Defaults to None. index_type (str) : index type string. index_param (list) : index build parameter. index_query_params (dict) : index query parameters.
database (str) : Database name to find the table. Defaults to 'default'. table (str) : Table name to operate on. Defaults to 'langchain'. metric (str) : Metric to compute distance. Supported are ('angular', 'euclidean', 'manhattan', 'hamming', 'dot'). Defaults to 'angular'. https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169 column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector; must be the same size as the number of columns. For example: .. code-block:: python {'id': 'text_id', 'uuid': 'global_unique_id', 'embedding': 'text_embedding', 'document': 'text_plain', 'metadata': 'metadata_dictionary_in_json', } Defaults to identity map. Show JSON schema{ "title": "ClickhouseSettings",
"description": "ClickHouse Client Configuration\n\nAttribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.", "type": "object", "properties": { "host": {
"title": "Host", "default": "localhost", "env_names": "{'clickhouse_host'}", "type": "string" }, "port": { "title": "Port", "default": 8123, "env_names": "{'clickhouse_port'}", "type": "integer" }, "username": { "title": "Username", "env_names": "{'clickhouse_username'}", "type": "string" }, "password": { "title": "Password", "env_names": "{'clickhouse_password'}", "type": "string" }, "index_type": { "title": "Index Type", "default": "annoy", "env_names": "{'clickhouse_index_type'}", "type": "string" }, "index_param": { "title": "Index Param", "default": [ "'L2Distance'", 100 ], "env_names": "{'clickhouse_index_param'}", "anyOf": [ { "type": "array", "items": {} }, { "type": "object" } ] }, "index_query_params": { "title": "Index Query Params", "default": {}, "env_names": "{'clickhouse_index_query_params'}", "type": "object", "additionalProperties": { "type": "string" } },
"column_map": { "title": "Column Map", "default": { "id": "id", "uuid": "uuid", "document": "document", "embedding": "embedding", "metadata": "metadata" }, "env_names": "{'clickhouse_column_map'}", "type": "object", "additionalProperties": { "type": "string" } }, "database": { "title": "Database", "default": "default", "env_names": "{'clickhouse_database'}", "type": "string" }, "table": { "title": "Table", "default": "langchain", "env_names": "{'clickhouse_table'}", "type": "string" }, "metric": { "title": "Metric", "default": "angular", "env_names": "{'clickhouse_metric'}", "type": "string" } }, "additionalProperties": false } Config env_file: str = .env env_file_encoding: str = utf-8 env_prefix: str = clickhouse_ Fields column_map (Dict[str, str]) database (str) host (str) index_param (Optional[Union[List, Dict]]) index_query_params (Dict[str, str]) index_type (str) metric (str) password (Optional[str]) port (int) table (str) username (Optional[str])
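Because env_prefix is clickhouse_, each field above can be overridden through a CLICKHOUSE_* environment variable (or a .env file). The following is a minimal pure-Python sketch of that lookup for a few of the fields; the helper name and dict-based result are illustrative only, not the real pydantic BaseSettings machinery:

```python
import os

# Defaults mirror the JSON schema above (host, port, database, table).
CLICKHOUSE_DEFAULTS = {"host": "localhost", "port": 8123, "database": "default", "table": "langchain"}

def load_clickhouse_settings(environ=None):
    """Resolve each field from a CLICKHOUSE_<FIELD> variable, else its default."""
    environ = os.environ if environ is None else environ
    settings = {}
    for field, default in CLICKHOUSE_DEFAULTS.items():
        raw = environ.get("CLICKHOUSE_" + field.upper())
        # Coerce the raw string to the default's type, e.g. port back to int.
        settings[field] = type(default)(raw) if raw is not None else default
    return settings

print(load_clickhouse_settings({"CLICKHOUSE_PORT": "9000", "CLICKHOUSE_TABLE": "docs"}))
```

The real class layers .env-file parsing and validation on top of this idea; the sketch only shows the prefix-plus-default resolution order.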
attribute column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'} attribute database: str = 'default' attribute host: str = 'localhost' attribute index_param: Optional[Union[List, Dict]] = ["'L2Distance'", 100] attribute index_query_params: Dict[str, str] = {} attribute index_type: str = 'annoy' attribute metric: str = 'angular' attribute password: Optional[str] = None attribute port: int = 8123 attribute table: str = 'langchain' attribute username: Optional[str] = None class langchain.vectorstores.DeepLake(dataset_path='./deeplake/', token=None, embedding_function=None, read_only=False, ingestion_batch_size=1000, num_workers=0, verbose=True, exec_option='python', **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Deep Lake, a data lake for deep learning applications. We integrated Deep Lake's similarity search and filtering for fast prototyping. Now it supports Tensor Query Language (TQL) for production use cases over billions of rows. Why Deep Lake? Not only stores embeddings, but also the original data with version control. Serverless; doesn't require another service and can be used with major cloud providers (S3, GCS, etc.) More than just a multi-modal vector store. You can use the dataset to fine-tune your own LLM models. To use, you should have the deeplake python package installed. Example
from langchain.vectorstores import DeepLake from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = DeepLake("langchain_store", embeddings.embed_query) Parameters dataset_path (str) – token (Optional[str]) – embedding_function (Optional[Embeddings]) – read_only (bool) – ingestion_batch_size (int) – num_workers (int) – verbose (bool) – exec_option (str) – kwargs (Any) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Examples >>> ids = deeplake_vectorstore.add_texts( ... texts = <list_of_texts>, ... metadatas = <list_of_metadata_jsons>, ... ids = <list_of_ids>, ... ) Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. **kwargs – other optional keyword arguments. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Examples >>> # Search using an embedding >>> data = vector_store.similarity_search( ... query=<your_query>, ... k=<num_items>, ... exec_option=<preferred_exec_option>, ... ) >>> # Run tql search:
>>> data = vector_store.tql_search( ... tql_query="SELECT * WHERE id == <id>", ... exec_option="compute_engine", ... ) Parameters k (int) – Number of Documents to return. Defaults to 4. query (str) – Text to look up similar documents. **kwargs – Additional keyword arguments include: embedding (Callable): Embedding function to use. Defaults to None. distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity, 'cos' for cosine, 'dot' for dot product. Defaults to 'L2'. filter (Union[Dict, Callable], optional): Additional filter before embedding search. - Dict: Key-value search on tensors of htype json (sample must satisfy all key-value filters). Dict = {"tensor_1": {"key": value}, "tensor_2": {"key": value}} Function: Compatible with deeplake.filter. Defaults to None. exec_option (str): Supports 3 ways to perform searching: 'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'. - 'python': Pure-python implementation for the client. WARNING: not recommended for big datasets. 'compute_engine': C++ implementation of the Compute Engine for the client. Not for in-memory or local datasets. 'tensor_db': Managed Tensor Database for storage and query. Only for data in Deep Lake Managed Database. Use runtime = {"db_engine": True} during dataset creation. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[Document]
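Conceptually, similarity_search embeds the query and returns the k entries whose stored vectors are closest under the chosen distance_metric. A pure-Python sketch of that ranking step with the default 'L2' (Euclidean) metric follows; the function and document names are illustrative, and the real work happens inside whichever exec_option backend is selected:

```python
import math

def l2_distance(a, b):
    """Euclidean distance, matching the default 'L2' distance_metric."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity_search_sketch(query_vector, vector_index, k=4):
    """vector_index: list of (document, vector) pairs; return the k nearest documents."""
    ranked = sorted(vector_index, key=lambda pair: l2_distance(query_vector, pair[1]))
    return [doc for doc, _ in ranked[:k]]

vector_index = [("doc_a", [0.0, 0.0]), ("doc_b", [1.0, 0.0]), ("doc_c", [5.0, 5.0])]
print(similarity_search_sketch([0.9, 0.1], vector_index, k=2))  # the two nearest documents
```

Swapping l2_distance for a cosine or dot-product function gives the 'cos' and 'dot' variants of the same ranking.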
similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Examples >>> # Search using an embedding >>> data = vector_store.similarity_search_by_vector( ... embedding=<your_embedding>, ... k=<num_items_to_return>, ... exec_option=<preferred_exec_option>, ... ) Parameters embedding (Union[List[float], np.ndarray]) – Embedding to find similar docs. k (int) – Number of Documents to return. Defaults to 4. **kwargs – Additional keyword arguments including: filter (Union[Dict, Callable], optional): Additional filter before embedding search. - Dict - Key-value search on tensors of htype json. True if all key-value filters are satisfied. Dict = {"tensor_name_1": {"key": value}, "tensor_name_2": {"key": value}} Function - Any function compatible with deeplake.filter. Defaults to None. exec_option (str): Options for search execution include "python", "compute_engine", or "tensor_db". Defaults to "python". - "python" - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. "compute_engine" - Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. "tensor_db" - Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
distance_metric (str): L2 for Euclidean, L1 for Nuclear, max for L-infinity distance, cos for cosine similarity, 'dot' for dot product. Defaults to L2. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[Document] similarity_search_with_score(query, k=4, **kwargs)[source] Run similarity search with Deep Lake with distance returned. Examples: >>> data = vector_store.similarity_search_with_score( ... query=<your_query>, ... embedding=<your_embedding_function>, ... k=<number_of_items_to_return>, ... exec_option=<preferred_exec_option>, ... ) Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. **kwargs – Additional keyword arguments. Some of these arguments are: distance_metric: L2 for Euclidean, L1 for Nuclear, max for L-infinity distance, cos for cosine similarity, 'dot' for dot product. Defaults to L2. filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None. embedding_function (Callable): Embedding function to use. Defaults to None. exec_option (str): DeepLakeVectorStore supports 3 ways to perform searching. It could be either "python", "compute_engine" or "tensor_db". Defaults to "python". - "python" - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. "compute_engine" - Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. "tensor_db" - Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation. kwargs (Any) – Returns List of documents most similar to the query text, with distance as a float. Return type List[Tuple[Document, float]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected docs. Examples: >>> data = vector_store.max_marginal_relevance_search_by_vector( ... embedding=<your_embedding>, ... fetch_k=<elements_to_fetch_before_mmr_search>, ... k=<number_of_items_to_return>, ... exec_option=<preferred_exec_option>, ... ) Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch for MMR algorithm. lambda_mult (float) – Number between 0 and 1 determining the degree of diversity. 0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5. exec_option (str) – DeepLakeVectorStore supports 3 ways for searching. Could be "python", "compute_engine" or "tensor_db". Defaults to "python". - "python" - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. "compute_engine" - Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. "tensor_db" - Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation. **kwargs – Additional keyword arguments. kwargs (Any) – Returns List[Documents] - A list of documents. Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source] Return docs selected using maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Examples: >>> # Search using an embedding >>> data = vector_store.max_marginal_relevance_search( ... query = <query_to_search>, ... embedding_function = <embedding_function_for_query>, ... k = <number_of_items_to_return>, ... exec_option = <preferred_exec_option>, ... ) Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents for MMR algorithm. lambda_mult (float) – Value between 0 and 1. 0 corresponds
to maximum diversity and 1 to minimum. Defaults to 0.5. exec_option (str) – Supports 3 ways to perform searching. - "python" - Pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues. "compute_engine" - Performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets. "tensor_db" - Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation. **kwargs – Additional keyword arguments kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Raises ValueError – when MMR search is on but embedding function is not specified. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, dataset_path='./deeplake/', **kwargs)[source] Create a Deep Lake dataset from raw documents. If a dataset_path is specified, the dataset will be persisted in that location, otherwise by default at ./deeplake Examples: >>> # Search using an embedding >>> vector_store = DeepLake.from_texts( ... texts = <the_texts_that_you_want_to_embed>, ... embedding_function = <embedding_function_for_query>, ... k = <number_of_items_to_return>, ... exec_option = <preferred_exec_option>,
... ) Parameters dataset_path (str) – The full path to the dataset. Can be: Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line) AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required in either the environment Local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset. In-memory path of the form mem://path/to/dataset which doesn't save the dataset, but keeps it in memory instead. Should be used only for testing as it does not persist. texts (List[str]) – List of texts to add. embedding (Optional[Embeddings]) – Embedding function. Defaults to None. Note, in other places, it is called embedding_function. metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None. ids (Optional[List[str]]) – List of document IDs. Defaults to None. **kwargs – Additional keyword arguments. kwargs (Any) – Returns Deep Lake dataset. Return type DeepLake Raises ValueError – If 'embedding' is provided in kwargs. This is deprecated; please use embedding_function instead. delete(ids=None, filter=None, delete_all=None)[source] Delete the entities in the dataset. Parameters ids (Optional[List[str]], optional) – The document_ids to delete. Defaults to None. filter (Optional[Dict[str, str]], optional) – The filter to delete by.
Defaults to None. delete_all (Optional[bool], optional) – Whether to drop the dataset. Defaults to None. Returns Whether the delete operation was successful. Return type bool classmethod force_delete_by_path(path)[source] Force delete dataset by path. Parameters path (str) – path of the dataset to delete. Raises ValueError – if deeplake is not installed. Return type None delete_dataset()[source] Delete the collection. Return type None class langchain.vectorstores.DocArrayHnswSearch(doc_index, embedding)[source] Bases: langchain.vectorstores.docarray.base.DocArrayIndex Wrapper around HnswLib storage. To use it, you should have the docarray package with version >=0.32.0 installed. You can install it with pip install "langchain[docarray]". Parameters doc_index (BaseDocIndex) – embedding (langchain.embeddings.base.Embeddings) – classmethod from_params(embedding, work_dir, n_dim, dist_metric='cosine', max_elements=1024, index=True, ef_construction=200, ef=10, M=16, allow_replace_deleted=True, num_threads=1, **kwargs)[source] Initialize DocArrayHnswSearch store. Parameters embedding (Embeddings) – Embedding function. work_dir (str) – path to the location where all the data will be stored. n_dim (int) – dimension of an embedding. dist_metric (str) – Distance metric for DocArrayHnswSearch can be one of: "cosine", "ip", and "l2". Defaults to "cosine".
max_elements (int) – Maximum number of vectors that can be stored. Defaults to 1024. index (bool) – Whether an index should be built for this field. Defaults to True. ef_construction (int) – defines a construction time/accuracy trade-off. Defaults to 200. ef (int) – parameter controlling query time/accuracy trade-off. Defaults to 10. M (int) – parameter that defines the maximum number of outgoing connections in the graph. Defaults to 16. allow_replace_deleted (bool) – Enables replacing of deleted elements with new added ones. Defaults to True. num_threads (int) – Sets the number of CPU threads to use. Defaults to 1. **kwargs – Other keyword arguments to be passed to the get_doc_cls method. kwargs (Any) – Return type langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch classmethod from_texts(texts, embedding, metadatas=None, work_dir=None, n_dim=None, **kwargs)[source] Create a DocArrayHnswSearch store and insert data. Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None. work_dir (str) – path to the location where all the data will be stored. n_dim (int) – dimension of an embedding. **kwargs – Other keyword arguments to be passed to the __init__ method. kwargs (Any) – Returns DocArrayHnswSearch Vector Store Return type langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
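The three dist_metric options correspond to hnswlib distance spaces. Assuming the usual hnswlib conventions ("l2" is squared Euclidean, "ip" is 1 minus the dot product, "cosine" is 1 minus cosine similarity; smaller always means more similar), they can be sketched in pure Python as follows. This is illustrative only, not the docarray/hnswlib implementation itself:

```python
import math

def l2_space(a, b):
    # "l2": squared Euclidean distance
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ip_space(a, b):
    # "ip": inner-product distance, 1 - <a, b>
    return 1.0 - sum(x * y for x, y in zip(a, b))

def cosine_space(a, b):
    # "cosine": 1 - cosine similarity; sensitive to direction, not magnitude
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical directions: cosine distance is 0 even when magnitudes differ.
print(cosine_space([1.0, 0.0], [3.0, 0.0]))
```

Note that for unit-normalized embeddings, "ip" and "cosine" rank neighbors identically, so the cheaper inner product is often preferred in that case.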
class langchain.vectorstores.DocArrayInMemorySearch(doc_index, embedding)[source] Bases: langchain.vectorstores.docarray.base.DocArrayIndex Wrapper around in-memory storage for exact search. To use it, you should have the docarray package with version >=0.32.0 installed. You can install it with pip install "langchain[docarray]". Parameters doc_index (BaseDocIndex) – embedding (langchain.embeddings.base.Embeddings) – classmethod from_params(embedding, metric='cosine_sim', **kwargs)[source] Initialize DocArrayInMemorySearch store. Parameters embedding (Embeddings) – Embedding function. metric (str) – metric for exact nearest-neighbor search. Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist". Defaults to "cosine_sim". **kwargs – Other keyword arguments to be passed to the get_doc_cls method. kwargs (Any) – Return type langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Create a DocArrayInMemorySearch store and insert data. Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text if it exists. Defaults to None. metric (str) – metric for exact nearest-neighbor search. Can be one of: "cosine_sim", "euclidean_dist" and "sqeuclidean_dist". Defaults to "cosine_sim". kwargs (Any) – Returns
DocArrayInMemorySearch Vector Store Return type langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url, index_name, embedding, *, ssl_verify=None)[source] Bases: langchain.vectorstores.base.VectorStore, abc.ABC Wrapper around Elasticsearch as a vector database. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor. Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding ) To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the "Deployments" page. To obtain your Elastic Cloud password for the default "elastic" user: Log in to the Elastic Cloud console at https://cloud.elastic.co Go to "Security" > "Users" Locate the "elastic" user and click "Edit" Click "Reset password" Follow the prompts to reset the password The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_host = "cluster_id.region_id.gcp.cloud.es.io" elasticsearch_url = f"https://username:password@{elastic_host}:9243" elastic_vector_search = ElasticVectorSearch( elasticsearch_url=elasticsearch_url, index_name="test_index", embedding=embedding ) Parameters elasticsearch_url (str) – The URL for the Elasticsearch instance. index_name (str) – The name of the Elasticsearch index for the embeddings. embedding (Embeddings) – An object that provides the ability to embed text. It should be an instance of a class that subclasses the Embeddings abstract base class, such as OpenAIEmbeddings() ssl_verify (Optional[Dict[str, Any]]) – Raises ValueError – If the elasticsearch python package is not installed. add_texts(texts, metadatas=None, refresh_indices=True, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. refresh_indices (bool) – Whether to refresh the Elasticsearch indices. ids (Optional[List[str]]) – kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Return docs most similar to query. Parameters
query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, filter=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – kwargs (Any) – Returns List of Documents most similar to the query. Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, elasticsearch_url=None, index_name=None, refresh_indices=True, **kwargs)[source] Construct ElasticVectorSearch wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in the Elasticsearch instance. Adds the documents to the newly created Elasticsearch index. This is intended to be a quick way to get started. Example from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch.from_texts( texts, embeddings, elasticsearch_url="http://localhost:9200" ) Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – elasticsearch_url (Optional[str]) –
index_name (Optional[str]) – refresh_indices (bool) – kwargs (Any) – Return type langchain.vectorstores.elastic_vector_search.ElasticVectorSearch create_index(client, index_name, mapping)[source] Parameters client (Any) – index_name (str) – mapping (Dict) – Return type None client_search(client, index_name, script_query, size)[source] Parameters client (Any) – index_name (str) – script_query (Dict) – size (int) – Return type Any delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None class langchain.vectorstores.FAISS(embedding_function, index, docstore, index_to_docstore_id, relevance_score_fn=<function _default_relevance_score_fn>, normalize_L2=False)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around FAISS vector database. To use, you should have the faiss python package installed. Example from langchain import FAISS faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id) Parameters embedding_function (Callable) – index (Any) – docstore (Docstore) – index_to_docstore_id (Dict[int, str]) – relevance_score_fn (Optional[Callable[[float], float]]) – normalize_L2 (bool) – add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters
texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of unique IDs. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] add_embeddings(text_embeddings, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters text_embeddings (Iterable[Tuple[str, List[float]]]) – Iterable pairs of string and embedding to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of unique IDs. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source] Return docs most similar to query. Parameters embedding (List[float]) – Embedding vector to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, Any]]) – Filter by metadata. Defaults to None. fetch_k (int) – (Optional[int]) Number of Documents to fetch before filtering. Defaults to 20. **kwargs – kwargs to be passed to similarity search. Can include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs kwargs (Any) – Returns
List of documents most similar to the query text and L2 distance in float for each. Lower score represents more similarity. Return type List[Tuple[langchain.schema.Document, float]] similarity_search_with_score(query, k=4, filter=None, fetch_k=20, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. fetch_k (int) – (Optional[int]) Number of Documents to fetch before filtering. Defaults to 20. kwargs (Any) – Returns List of documents most similar to the query text with L2 distance in float. Lower score represents more similarity. Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. fetch_k (int) – (Optional[int]) Number of Documents to fetch before filtering. Defaults to 20. kwargs (Any) – Returns List of Documents most similar to the embedding. Return type List[langchain.schema.Document] similarity_search(query, k=4, filter=None, fetch_k=20, **kwargs)[source] Return docs most similar to query. Parameters
query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, Any]]) – Filter by metadata. Defaults to None. fetch_k (int) – (Optional[int]) Number of Documents to fetch before filtering. Defaults to 20. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] max_marginal_relevance_search_with_score_by_vector(embedding, *, k=4, fetch_k=20, lambda_mult=0.5, filter=None)[source] Return docs and their similarity scores selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch before filtering to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, Any]]) – Returns List of Documents and similarity scores selected by maximal marginal relevance and score for each. Return type List[Tuple[langchain.schema.Document, float]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source] Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch before filtering to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, Any]]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch before filtering (if needed) to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, Any]]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance.
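The maximal-marginal-relevance selection described above can be sketched in pure Python. This is a simplified, illustrative version (plain lists and dot products, not the FAISS-backed implementation); the lambda_mult semantics match the docstring: 0 favors maximum diversity, 1 favors pure relevance.

```python
# Illustrative MMR selection: greedily pick k candidate indices balancing
# relevance to the query against redundancy with already-selected items.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mmr_select(query, candidates, k=4, lambda_mult=0.5):
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in remaining:
            relevance = dot(query, candidates[i])
            # Redundancy = worst-case similarity to anything already selected.
            redundancy = max(
                (dot(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With lambda_mult=1.0 this degenerates to a plain nearest-neighbor ranking; with lambda_mult=0.0 it avoids near-duplicates of already-selected vectors regardless of query relevance.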
Return type List[langchain.schema.Document] merge_from(target)[source] Merge another FAISS object with the current one. Add the target FAISS to the current one. Parameters target (langchain.vectorstores.faiss.FAISS) – FAISS object you wish to merge into the current one Returns None. Return type None classmethod from_texts(texts, embedding, metadatas=None, ids=None, **kwargs)[source] Construct FAISS wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Creates an in-memory docstore Initializes the FAISS database This is intended to be a quick way to get started. Example from langchain import FAISS from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() faiss = FAISS.from_texts(texts, embeddings) Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – kwargs (Any) – Return type langchain.vectorstores.faiss.FAISS classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ids=None, **kwargs)[source] Construct FAISS wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Creates an in-memory docstore Initializes the FAISS database This is intended to be a quick way to get started. Example from langchain import FAISS from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text_embeddings = embeddings.embed_documents(texts) text_embedding_pairs = list(zip(texts, text_embeddings)) faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
Parameters text_embeddings (List[Tuple[str, List[float]]]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – kwargs (Any) – Return type langchain.vectorstores.faiss.FAISS save_local(folder_path, index_name='index')[source] Save FAISS index, docstore, and index_to_docstore_id to disk. Parameters folder_path (str) – folder path to save index, docstore, and index_to_docstore_id to. index_name (str) – for saving with a specific index file name Return type None classmethod load_local(folder_path, embeddings, index_name='index')[source] Load FAISS index, docstore, and index_to_docstore_id from disk. Parameters folder_path (str) – folder path to load index, docstore, and index_to_docstore_id from. embeddings (langchain.embeddings.base.Embeddings) – Embeddings to use when generating queries index_name (str) – for saving with a specific index file name Return type langchain.vectorstores.faiss.FAISS class langchain.vectorstores.Hologres(connection_string, embedding_function, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, logger=None)[source] Bases: langchain.vectorstores.base.VectorStore VectorStore implementation using Hologres. connection_string is a hologres connection string. embedding_function any embedding function implementing langchain.embeddings.base.Embeddings interface. ndims is the number of dimensions of the embedding output. table_name is the name of the table to store embeddings and data. (default: langchain_pg_embedding)
- NOTE: The table will be created when initializing the store (if it does not exist). So, make sure the user has the right permissions to create tables. pre_delete_table if True, will delete the table if it exists. (default: False) - Useful for testing. Parameters connection_string (str) – embedding_function (Embeddings) – ndims (int) – table_name (str) – pre_delete_table (bool) – logger (Optional[logging.Logger]) – Return type None create_vector_extension()[source] Return type None create_table()[source] Return type None add_embeddings(texts, embeddings, metadatas, ids, **kwargs)[source] Add embeddings to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. embeddings (List[List[float]]) – List of list of embedding vectors. metadatas (List[dict]) – List of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (List[str]) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Run similarity search with Hologres with distance.
Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, filter=None)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source] Parameters embedding (List[float]) – k (int) – filter (Optional[dict]) – Return type List[Tuple[langchain.schema.Document, float]]
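Search-with-score, as described for these stores, boils down to ranking documents by a distance between embeddings and returning (document, score) pairs. A minimal pure-Python sketch, illustrative only and independent of any backend (here using Euclidean distance, where lower means more similar):

```python
import math

# Illustrative (doc, score) ranking over in-memory pairs:
# lower Euclidean distance = more similar, so results sort ascending.
def similarity_search_with_score(query_vec, docs):
    """docs: list of (text, embedding) pairs; returns [(text, distance), ...]."""
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    scored = [(text, l2(query_vec, vec)) for text, vec in docs]
    return sorted(scored, key=lambda pair: pair[1])
```

A real store pushes this ranking into the database or index; the shape of the result (best match first, with its score) is the same.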
classmethod from_texts(texts, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Postgres connection string is required. Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ndims (int) – table_name (str) – ids (Optional[List[str]]) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source] Construct Hologres wrapper from raw documents and pre-generated embeddings. Return VectorStore initialized from documents and embeddings. Postgres connection string is required. Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Example from langchain import Hologres from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text_embeddings = embeddings.embed_documents(texts) text_embedding_pairs = list(zip(texts, text_embeddings)) hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings) Parameters text_embeddings (List[Tuple[str, List[float]]]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ndims (int) –
table_name (str) – ids (Optional[List[str]]) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod from_existing_index(embedding, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, **kwargs)[source] Get instance of an existing Hologres store. This method will return the instance of the store without inserting any new embeddings Parameters embedding (langchain.embeddings.base.Embeddings) – ndims (int) – table_name (str) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod get_connection_string(kwargs)[source] Parameters kwargs (Dict[str, Any]) – Return type str classmethod from_documents(documents, embedding, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_collection=False, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Postgres connection string is required. Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – ndims (int) – table_name (str) – ids (Optional[List[str]]) – pre_delete_collection (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod connection_string_from_db_params(host, port, database, user, password)[source]
Return connection string from database parameters. Parameters host (str) – port (int) – database (str) – user (str) – password (str) – Return type str class langchain.vectorstores.LanceDB(connection, embedding, vector_key='vector', id_key='id', text_key='text')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around LanceDB vector database. To use, you should have lancedb python package installed. Example db = lancedb.connect('./lancedb') table = db.open_table('my_table') vectorstore = LanceDB(table, embedding_function) vectorstore.add_texts(['text1', 'text2']) result = vectorstore.similarity_search('text1') Parameters connection (Any) – embedding (Embeddings) – vector_key (Optional[str]) – id_key (Optional[str]) – text_key (Optional[str]) – add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Turn texts into embeddings and add them to the database Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. kwargs (Any) – Returns List of ids of the added texts. Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return documents most similar to the query. Parameters query (str) – String to query the vectorstore with. k (int) – Number of documents to return. kwargs (Any) – Returns
List of documents most similar to the query. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, connection=None, vector_key='vector', id_key='id', text_key='text', **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – connection (Any) – vector_key (Optional[str]) – id_key (Optional[str]) – text_key (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.lancedb.LanceDB class langchain.vectorstores.MatchingEngine(project_id, index, endpoint, embedding, gcs_client, gcs_bucket_name, credentials=None)[source] Bases: langchain.vectorstores.base.VectorStore Vertex Matching Engine implementation of the vector store. While the embeddings are stored in the Matching Engine, the embedded documents will be stored in GCS. An existing Index and corresponding Endpoint are preconditions for using this module. See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb Note that this implementation is mostly meant for reading if you are planning to do a real time implementation. While reading is a real time operation, updating the index takes close to one hour. Parameters project_id (str) – index (MatchingEngineIndex) – endpoint (MatchingEngineIndexEndpoint) – embedding (Embeddings) – gcs_client (storage.Client) – gcs_bucket_name (str) – credentials (Optional[Credentials]) –
add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters. Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – The string that will be used to search for similar documents. k (int) – The number of neighbors that will be retrieved. kwargs (Any) – Returns A list of k matching documents. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Use from components instead. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.matching_engine.MatchingEngine classmethod from_components(project_id, region, gcs_bucket_name, index_id, endpoint_id, credentials_path=None, embedding=None)[source] Takes the object creation out of the constructor. Parameters project_id (str) – The GCP project id. region (str) – The default location making the API calls. It must have the same location as the GCS bucket and must be regional.
gcs_bucket_name (str) – The location where the vectors will be stored in order for the index to be created. index_id (str) – The id of the created index. endpoint_id (str) – The id of the created endpoint. credentials_path (Optional[str]) – (Optional) The path of the Google credentials on the local file system. embedding (Optional[langchain.embeddings.base.Embeddings]) – The Embeddings that will be used for embedding the texts. Returns A configured MatchingEngine with the texts added to the index. Return type langchain.vectorstores.matching_engine.MatchingEngine class langchain.vectorstores.Milvus(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around the Milvus vector database. Parameters embedding_function (Embeddings) – collection_name (str) – connection_args (Optional[dict[str, Any]]) – consistency_level (str) – index_params (Optional[dict]) – search_params (Optional[dict]) – drop_old (Optional[bool]) – add_texts(texts, metadatas=None, timeout=None, batch_size=1000, **kwargs)[source] Insert text data into Milvus. Inserting data when the collection has not been made yet will result in creating a new Collection. The data of the first entity decides the schema of the new collection, the dim is extracted from the first embedding and the columns are decided by the first metadata dict.
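add_texts above inserts in batches of batch_size (default 1000). Splitting an arbitrary iterable into fixed-size batches, as such a bulk-insert loop would, can be sketched in plain Python (a generic illustration, not the Milvus client code):

```python
from itertools import islice

# Illustrative batching helper: yield successive lists of at most
# batch_size items from any iterable, as a store's insert loop might do.
def batched(iterable, batch_size=1000):
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch
```

Each yielded batch would then be handed to one insert call, with the per-batch timeout applied to each call.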
Metadata keys will need to be present for all inserted values. At the moment there is no None equivalent in Milvus. Parameters texts (Iterable[str]) – The texts to embed, it is assumed that they all fit in memory. metadatas (Optional[List[dict]]) – Metadata dicts attached to each of the texts. Defaults to None. timeout (Optional[int]) – Timeout for each batch insert. Defaults to None. batch_size (int, optional) – Batch size to use for insertion. Defaults to 1000. kwargs (Any) – Raises MilvusException – Failure to add texts Returns The resulting keys for each inserted element. Return type List[str] similarity_search(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a similarity search against the query string. Parameters query (str) – The text to search. k (int, optional) – How many results to return. Defaults to 4. param (dict, optional) – The search params for the index type. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] similarity_search_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a similarity search against the query string. Parameters embedding (List[float]) – The embedding vector to search. k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] similarity_search_with_score(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search on a query string and return results with score. For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md Parameters query (str) – The text being searched. k (int, optional) – The amount of results to return. Defaults to 4. param (dict) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Return type List[Tuple[Document, float]] similarity_search_with_score_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search on a query string and return results with score. For more information about the search parameters, take a look at the pymilvus documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md Parameters embedding (List[float]) – The embedding vector being searched. k (int, optional) – The amount of results to return. Defaults to 4. param (dict) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Result doc and score. Return type List[Tuple[Document, float]] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search and return results that are reordered by MMR. Parameters query (str) – The text being searched. k (int, optional) – How many results to give. Defaults to 4. fetch_k (int, optional) – Total results to select k from. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5 param (dict, optional) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document]
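The fetch_k parameter in these MMR methods follows a common over-fetch pattern: retrieve fetch_k candidates first, then narrow to k after filtering or reranking. A generic sketch of that funnel (illustrative only, not the Milvus implementation):

```python
# Illustrative fetch_k -> k funnel: over-fetch fetch_k candidates sorted
# best-first, apply a post-filter predicate, keep only the top k survivors.
def narrow_candidates(scored_docs, keep, k=4, fetch_k=20):
    """scored_docs: [(doc, score), ...] already sorted best-first."""
    candidates = scored_docs[:fetch_k]
    survivors = [(doc, score) for doc, score in candidates if keep(doc)]
    return survivors[:k]
```

Over-fetching matters because a filter or diversity step applied after retrieval can discard candidates; fetching only k up front could leave fewer than k results.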
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search and return results that are reordered by MMR. Parameters embedding (List[float]) – The embedding vector being searched. k (int, optional) – How many results to give. Defaults to 4. fetch_k (int, optional) – Total results to select k from. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5 param (dict, optional) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source] Create a Milvus collection, index it with HNSW, and insert data. Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None. collection_name (str, optional) – Collection name to use. Defaults to "LangChainCollection". connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION. consistency_level (str, optional) – Which consistency level to use. Defaults to "Session". index_params (Optional[dict], optional) – Which index_params to use. Defaults to None. search_params (Optional[dict], optional) – Which search params to use. Defaults to None. drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False. kwargs (Any) – Returns Milvus Vector Store Return type Milvus class langchain.vectorstores.Zilliz(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source] Bases: langchain.vectorstores.milvus.Milvus Parameters embedding_function (Embeddings) – collection_name (str) – connection_args (Optional[dict[str, Any]]) – consistency_level (str) – index_params (Optional[dict]) – search_params (Optional[dict]) – drop_old (Optional[bool]) – classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source] Create a Zilliz collection, index it with HNSW, and insert data. Parameters texts (List[str]) – Text data.
Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None. collection_name (str, optional) – Collection name to use. Defaults to "LangChainCollection". connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION. consistency_level (str, optional) – Which consistency level to use. Defaults to "Session". index_params (Optional[dict], optional) – Which index_params to use. Defaults to None. search_params (Optional[dict], optional) – Which search params to use. Defaults to None. drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False. kwargs (Any) – Returns Zilliz Vector Store Return type Zilliz class langchain.vectorstores.SingleStoreDB(embedding, *, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore This class serves as a Pythonic interface to the SingleStore DB database. The prerequisite for using this class is the installation of the singlestoredb Python package. The SingleStoreDB vectorstore can be created by providing an embedding function and the relevant parameters for the database connection, connection pool, and optionally, the names of the table and the fields to use. Parameters embedding (Embeddings) –
distance_strategy (DistanceStrategy) – table_name (str) – content_field (str) – metadata_field (str) – vector_field (str) – pool_size (int) – max_overflow (int) – timeout (float) – kwargs (Any) – vector_field Pass the rest of the kwargs to the connection. connection_kwargs Add program name and version to connection attributes. add_texts(texts, metadatas=None, embeddings=None, **kwargs)[source] Add more texts to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. Defaults to None. embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None. kwargs (Any) – Returns empty list Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Returns the most similar indexed documents to the query text. Uses cosine similarity. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. filter (dict) – A dictionary of metadata fields and values to filter by. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] Examples similarity_search_with_score(query, k=4, filter=None)[source] Return docs most similar to query. Uses cosine similarity.
Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – A dictionary of metadata fields and values to filter by. Defaults to None. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source] Create a SingleStoreDB vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new table for the embeddings in SingleStoreDB. Adds the documents to the newly created table. This is intended to be a quick way to get started. Example Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – distance_strategy (langchain.vectorstores.singlestoredb.DistanceStrategy) – table_name (str) – content_field (str) – metadata_field (str) – vector_field (str) – pool_size (int) – max_overflow (int) – timeout (float) – kwargs (Any) – Return type langchain.vectorstores.singlestoredb.SingleStoreDB as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.singlestoredb.SingleStoreDBRetriever
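similarity_search above ranks documents by cosine similarity. A stdlib-only sketch of that scoring, for illustration only (not SingleStoreDB's actual SQL-side implementation):

```python
import math
from typing import List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    # cos(a, b) = a·b / (|a| |b|); returns 0.0 for zero-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec: List[float],
          doc_vecs: List[List[float]],
          k: int = 4) -> List[Tuple[int, float]]:
    # Return (index, score) pairs for the k most similar vectors,
    # mirroring the ordering similarity_search_with_score would produce.
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]
```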
class langchain.vectorstores.Clarifai(user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around the Clarifai AI platform's vector store. To use, you should have the clarifai python package installed. Example from langchain.vectorstores import Clarifai from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Clarifai("langchain_store", embeddings.embed_query) Parameters user_id (Optional[str]) – app_id (Optional[str]) – pat (Optional[str]) – number_of_docs (Optional[int]) – api_base (Optional[str]) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Add texts to the Clarifai vectorstore. This will push the text to a Clarifai application. The application uses a base workflow that creates and stores an embedding for each text. Make sure you are using a base workflow that is compatible with text (such as Language Understanding). Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] similarity_search_with_score(query, k=4, filter=None, namespace=None, **kwargs)[source] Run similarity search with score using Clarifai.
Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. namespace (Optional[str]) – kwargs (Any) – Returns List of documents most similar to the query text. Return type List[Document] similarity_search(query, k=4, **kwargs)[source] Run similarity search using Clarifai. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding=None, metadatas=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source] Create a Clarifai vectorstore from a list of texts. Parameters user_id (str) – User ID. app_id (str) – App ID. texts (List[str]) – List of texts to add. pat (Optional[str]) – Personal access token. Defaults to None. number_of_docs (Optional[int]) – Number of documents to return. Defaults to None. api_base (Optional[str]) – API base. Defaults to None. metadatas (Optional[List[dict]]) – Optional list of metadatas. Defaults to None. embedding (Optional[langchain.embeddings.base.Embeddings]) – kwargs (Any) – Returns Clarifai vectorstore. Return type Clarifai
classmethod from_documents(documents, embedding=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source] Create a Clarifai vectorstore from a list of documents. Parameters user_id (str) – User ID. app_id (str) – App ID. documents (List[Document]) – List of documents to add. pat (Optional[str]) – Personal access token. Defaults to None. number_of_docs (Optional[int]) – Number of documents to return during vector search. Defaults to None. api_base (Optional[str]) – API base. Defaults to None. embedding (Optional[langchain.embeddings.base.Embeddings]) – kwargs (Any) – Returns Clarifai vectorstore. Return type Clarifai class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url, index_name, embedding_function, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around OpenSearch as a vector database. Example from langchain import OpenSearchVectorSearch opensearch_vector_search = OpenSearchVectorSearch( "http://localhost:9200", "embeddings", embedding_function ) Parameters opensearch_url (str) – index_name (str) – embedding_function (Embeddings) – kwargs (Any) – add_texts(texts, metadatas=None, ids=None, bulk_size=500, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore.
Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. bulk_size (int) – Bulk API request count; Default: 500 kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] Optional Args: vector_field: Document field embeddings are stored in. Defaults to "vector_field". text_field: Document field the text of the document is stored in. Defaults to "text". similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. By default, supports Approximate Search. Also supports Script Scoring and Painless Scripting. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] Optional Args: vector_field: Document field embeddings are stored in. Defaults to "vector_field". text_field: Document field the text of the document is stored in. Defaults to "text". metadata_field: Document field that metadata is stored in. Defaults to "metadata". Can be set to a special value "*" to include the entire document. Optional Args for Approximate Search: search_type: "approximate_search"; default: "approximate_search" boolean_filter: A Boolean filter consists of a Boolean query that contains a k-NN query and a filter.
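The approximate-search options above (vector_field, boolean_filter) correspond to an OpenSearch k-NN query body. A hedged sketch of roughly what such a body looks like; the exact structure the wrapper emits may differ:

```python
def knn_query(vector, k=4, vector_field="vector_field", boolean_filter=None):
    # Approximate k-NN search body. When a boolean_filter is supplied,
    # the knn clause is nested under a bool query's "must" clause,
    # matching the default subquery_clause documented here.
    knn = {"knn": {vector_field: {"vector": vector, "k": k}}}
    if boolean_filter is None:
        return {"size": k, "query": knn}
    return {
        "size": k,
        "query": {"bool": {"filter": boolean_filter, "must": [knn]}},
    }
```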
subquery_clause: Query clause on the knn vector field; default: "must" lucene_filter: the Lucene algorithm decides whether to perform an exact k-NN search with pre-filtering or an approximate search with modified post-filtering. Optional Args for Script Scoring Search: search_type: "script_scoring"; default: "approximate_search" space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct", "hammingbit"; default: "l2" pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {"match_all": {}} Optional Args for Painless Scripting Search: search_type: "painless_scripting"; default: "approximate_search" space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared" pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {"match_all": {}} similarity_search_with_score(query, k=4, **kwargs)[source] Return docs and their scores most similar to query. By default, supports Approximate Search. Also supports Script Scoring and Painless Scripting. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents along with their scores most similar to the query. Return type List[Tuple[langchain.schema.Document, float]] Optional Args: same as similarity_search max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type list[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, bulk_size=500, **kwargs)[source] Construct OpenSearchVectorSearch wrapper from raw documents. Example from langchain import OpenSearchVectorSearch from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() opensearch_vector_search = OpenSearchVectorSearch.from_texts( texts, embeddings, opensearch_url="http://localhost:9200" ) OpenSearch by default supports Approximate Search powered by nmslib, faiss and lucene engines, recommended for large datasets. Also supports brute force search through Script Scoring and Painless Scripting. Optional Args: vector_field: Document field embeddings are stored in. Defaults to "vector_field". text_field: Document field the text of the document is stored in. Defaults to "text". Optional Keyword Args for Approximate Search: engine: "nmslib", "faiss", "lucene"; default: "nmslib"
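The maximal marginal relevance selection described above can be sketched in plain Python: greedily pick the candidate maximizing lambda_mult * sim(query, d) - (1 - lambda_mult) * max sim(d, selected). This is an illustrative greedy implementation under the same lambda_mult convention, not the library's exact code:

```python
import math

def _cos(a, b):
    # Plain cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr(query_vec, doc_vecs, k=4, lambda_mult=0.5):
    # Greedy MMR: trade query relevance against redundancy with the
    # documents selected so far. lambda_mult=1 -> pure relevance
    # (minimum diversity), lambda_mult=0 -> maximum diversity.
    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = _cos(query_vec, doc_vecs[i])
            redundancy = max(
                (_cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```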
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2" ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512 ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to a more accurate graph but slower indexing speed; default: 512 m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16 Keyword Args for Script Scoring or Painless Scripting: is_appx_search: False Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – bulk_size (int) – kwargs (Any) – Return type langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch class langchain.vectorstores.MongoDBAtlasVectorSearch(collection, embedding, *, index_name='default', text_key='text', embedding_key='embedding')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around MongoDB Atlas Vector Search. To use, you should have both: - the pymongo python package installed - a connection string associated with a MongoDB Atlas Cluster having deployed an Atlas Search index Example from langchain.vectorstores import MongoDBAtlasVectorSearch from langchain.embeddings.openai import OpenAIEmbeddings from pymongo import MongoClient mongo_client = MongoClient("<YOUR-CONNECTION-STRING>") collection = mongo_client["<db_name>"]["<collection_name>"] embeddings = OpenAIEmbeddings() vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
Parameters collection (Collection[MongoDBDocumentType]) – embedding (Embeddings) – index_name (str) – text_key (str) – embedding_key (str) – classmethod from_connection_string(connection_string, namespace, embedding, **kwargs)[source] Parameters connection_string (str) – namespace (str) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[Dict[str, Any]]]) – Optional list of metadatas associated with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List similarity_search_with_score(query, *, k=4, pre_filter=None, post_filter_pipeline=None)[source] Return MongoDB documents most similar to query, along with scores. Uses the knnBeta operator available in MongoDB Atlas Search. This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes. For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta Parameters query (str) – Text to look up documents similar to. k (int) – Optional Number of Documents to return. Defaults to 4. pre_filter (Optional[dict]) – Optional Dictionary of argument(s) to prefilter on document fields.
post_filter_pipeline (Optional[List[Dict]]) – Optional Pipeline of MongoDB aggregation stages following the knnBeta search. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, pre_filter=None, post_filter_pipeline=None, **kwargs)[source] Return MongoDB documents most similar to query. Uses the knnBeta operator available in MongoDB Atlas Search. This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes. For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta Parameters query (str) – Text to look up documents similar to. k (int) – Optional Number of Documents to return. Defaults to 4. pre_filter (Optional[dict]) – Optional Dictionary of argument(s) to prefilter on document fields. post_filter_pipeline (Optional[List[Dict]]) – Optional Pipeline of MongoDB aggregation stages following the knnBeta search. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, collection=None, **kwargs)[source] Construct MongoDBAtlasVectorSearch wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Adds the documents to a provided MongoDB Atlas Vector Search index (Lucene). This is intended to be a quick way to get started. Example Parameters texts (List[str]) –
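The knnBeta search described above runs as a $search aggregation stage followed by any caller-supplied post-filter stages. A sketch of roughly how such a pipeline could be assembled; the stage layout follows the linked Atlas Search docs and is an assumption, not necessarily what this class emits:

```python
def knn_beta_pipeline(query_vector, index_name="default",
                      embedding_key="embedding", k=4,
                      pre_filter=None, post_filter_pipeline=None):
    # $search with the (early-access) knnBeta operator; pre_filter is
    # embedded in the operator, post-filter stages are appended after.
    knn_beta = {"vector": query_vector, "path": embedding_key, "k": k}
    if pre_filter:
        knn_beta["filter"] = pre_filter
    pipeline = [{"$search": {"index": index_name, "knnBeta": knn_beta}}]
    if post_filter_pipeline:
        pipeline.extend(post_filter_pipeline)
    return pipeline
```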
embedding (Embeddings) – metadatas (Optional[List[dict]]) – collection (Optional[Collection[MongoDBDocumentType]]) – kwargs (Any) – Return type MongoDBAtlasVectorSearch class langchain.vectorstores.MyScale(embedding, config=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around the MyScale vector database. You need the clickhouse-connect python package and a valid account to connect to MyScale. MyScale can not only search with simple vector indexes; it also supports complex queries with multiple conditions, constraints, and even sub-queries. For more information, please visit the [MyScale official site](https://docs.myscale.com/en/overview/) Parameters embedding (Embeddings) – config (Optional[MyScaleSettings]) – kwargs (Any) – Return type None escape_str(value)[source] Parameters value (str) – Return type str add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Batch size of insertion metadata – Optional column data to be inserted metadatas (Optional[List[dict]]) – kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]
Create a MyScale wrapper with existing texts. Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (MyScaleSettings, Optional) – MyScale configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batch size when transmitting data to MyScale. Defaults to 32. metadata (List[dict], optional) – Metadata for the texts. Defaults to None. Other keyword arguments will pass into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api) embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[Dict[Any, Any]]]) – kwargs (Any) – Returns MyScale Index Return type langchain.vectorstores.myscale.MyScale similarity_search(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with MyScale Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document] similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]
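The NOTE above warns that where_str is interpolated into SQL and must never be built from raw end-user input. A minimal sketch of the kind of quoting a helper like escape_str might perform (the actual method may differ); the build_where helper is hypothetical, shown only to illustrate the recommended metadata.attribute form:

```python
def escape_str(value: str) -> str:
    # Double single quotes so a string literal cannot break out of its
    # quoting context; a minimal guard, not full input sanitization.
    return value.replace("'", "''")

def build_where(metadata_column: str, attribute: str, value: str) -> str:
    # Hypothetical helper: produce e.g. metadata.source = 'docs',
    # using the {metadata_column}.attribute form recommended above.
    return f"{metadata_column}.{attribute} = '{escape_str(value)}'"
```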
Perform a similarity search with MyScale by vectors. Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. embedding (List[float]) – kwargs (Any) – Returns List of (Document, similarity) Return type List[Document] similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with MyScale Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-users fill this, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Document] drop()[source] Helper function: Drop data Return type None property metadata_column: str pydantic settings langchain.vectorstores.MyScaleSettings[source]
Bases: pydantic.env_settings.BaseSettings MyScale Client Configuration Attribute: myscale_host (str) : A URL to connect to the MyScale backend. Defaults to 'localhost'. myscale_port (int) : URL port to connect with HTTP. Defaults to 8443. username (str) : Username to login. Defaults to None. password (str) : Password to login. Defaults to None. index_type (str) : index type string. index_param (dict) : index build parameter. database (str) : Database name to find the table. Defaults to 'default'. table (str) : Table name to operate on. Defaults to 'vector_table'. metric (str) : Metric to compute distance; supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'. column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector; must be the same size as the number of columns. For example: .. code-block:: python {'id': 'text_id', 'vector': 'text_embedding', 'text': 'text_plain', 'metadata': 'metadata_dictionary_in_json', } Defaults to identity map.
Show JSON schema{ "title": "MyScaleSettings", "description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.", "type": "object", "properties": { "host": { "title": "Host", "default": "localhost", "env_names": "{'myscale_host'}", "type": "string"
}, "port": { "title": "Port", "default": 8443, "env_names": "{'myscale_port'}", "type": "integer" }, "username": { "title": "Username", "env_names": "{'myscale_username'}", "type": "string" }, "password": { "title": "Password", "env_names": "{'myscale_password'}", "type": "string" }, "index_type": { "title": "Index Type", "default": "IVFFLAT", "env_names": "{'myscale_index_type'}", "type": "string" }, "index_param": { "title": "Index Param", "env_names": "{'myscale_index_param'}", "type": "object", "additionalProperties": { "type": "string" } }, "column_map": { "title": "Column Map", "default": { "id": "id", "text": "text", "vector": "vector", "metadata": "metadata" }, "env_names": "{'myscale_column_map'}", "type": "object", "additionalProperties": { "type": "string" } }, "database": { "title": "Database", "default": "default", "env_names": "{'myscale_database'}", "type": "string" }, "table": { "title": "Table",
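The schema above maps each settings field to a MYSCALE_* environment variable. A stdlib-only sketch of that mapping for illustration; the real class uses pydantic's BaseSettings, and only a subset of the fields is mirrored here:

```python
import os
from dataclasses import dataclass

@dataclass
class MyScaleConfig:
    # Defaults mirror those shown in the JSON schema above.
    host: str = "localhost"
    port: int = 8443
    index_type: str = "IVFFLAT"
    database: str = "default"
    table: str = "vector_table"

    @classmethod
    def from_env(cls) -> "MyScaleConfig":
        # Read the env_names shown in the schema, falling back to defaults.
        return cls(
            host=os.environ.get("MYSCALE_HOST", "localhost"),
            port=int(os.environ.get("MYSCALE_PORT", "8443")),
            index_type=os.environ.get("MYSCALE_INDEX_TYPE", "IVFFLAT"),
            database=os.environ.get("MYSCALE_DATABASE", "default"),
            table=os.environ.get("MYSCALE_TABLE", "vector_table"),
        )
```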