Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API key into the OPENWEATHERMAP_API_KEY env variable
pip install pyowm
field openweathermap_api_key: Optional[str] = None#
field owm: Any = None#
run(location: str) → str[source]#
Get the current weather information for a specified location.
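A minimal usage sketch (assuming OPENWEATHERMAP_API_KEY is set in the environment; the location string is illustrative):
from langchain.utilities import OpenWeatherMapAPIWrapper
weather = OpenWeatherMapAPIWrapper()
# run() returns a text summary of current conditions for the location
print(weather.run("London,GB"))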
pydantic model langchain.utilities.PowerBIDataset[source]#
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Validators
fix_table_names » table_names
token_or_credential_present » all fields
field aiosession: Optional[aiohttp.ClientSession] = None#
field credential: Optional[TokenCredential] = None#
field dataset_id: str [Required]#
field group_id: Optional[str] = None#
field impersonated_user_name: Optional[str] = None#
field sample_rows_in_table_info: int = 1#
Constraints
exclusiveMinimum = 0
maximum = 10
field schemas: Dict[str, str] [Optional]#
field table_names: List[str] [Required]#
field token: Optional[str] = None#
async aget_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]#
Get information about specified tables.
async arun(command: str) → Any[source]#
Execute a DAX command and return the result asynchronously.
get_schemas() → str[source]#
Get the available schemas.
get_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]#
Get information about specified tables.
get_table_names() → Iterable[str][source]#
Get names of tables available.
run(command: str) → Any[source]#
Execute a DAX command and return a JSON representing the results.
property headers: Dict[str, str]#
Get the token.
property request_url: str#
Get the request url.
property table_info: str#
Information about all tables in the database.
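A hedged construction sketch (the dataset ID, table names, and token below are placeholders; in practice you would pass an azure.identity credential or a freshly issued bearer token):
from langchain.utilities import PowerBIDataset
powerbi = PowerBIDataset(
    dataset_id="<dataset-id>",           # placeholder
    table_names=["Sales", "Customers"],  # placeholder table names
    token="<bearer-token>",              # or pass credential=... instead
)
# run() executes a DAX command against the dataset
result = powerbi.run("EVALUATE TOPN(5, Sales)")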
pydantic model langchain.utilities.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run the command with its own globals/locals and return anything printed.
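A short sketch of run(); only printed output is captured, so print the value you want back:
from langchain.utilities import PythonREPL
repl = PythonREPL()
print(repl.run("print(2 ** 10)"))  # "1024"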
pydantic model langchain.utilities.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use, you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note: the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
                           unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with JSON results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Asynchronous version of run.
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and return the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
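A brief sketch of results() (the host URL is a placeholder); each returned dict carries the keys listed above:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")  # placeholder host
hits = searx.results("large language models", num_results=3)
for hit in hits:
    print(hit["title"], hit["link"])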
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
pydantic model langchain.utilities.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed, | https://python.langchain.com/en/latest/reference/modules/utilities.html |
f630cc88fdd9-12 | To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) → dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result.
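A minimal sketch of run() (assuming SERPAPI_API_KEY is set in the environment):
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
# run() reduces the raw SerpAPI response to a short answer string
print(serpapi.run("capital of France"))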
class langchain.utilities.SparkSQL(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]#
classmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) → langchain.utilities.spark_sql.SparkSQL[source]#
Create a remote Spark session via Spark Connect.
For example: SparkSQL.from_uri("sc://localhost:15002")
get_table_info(table_names: Optional[List[str]] = None) → str[source]#
get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]#
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_usable_table_names() → Iterable[str][source]#
Get names of tables available.
run(command: str, fetch: str = 'all') → str[source]#
run_no_throw(command: str, fetch: str = 'all') → str[source]#
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
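A hedged end-to-end sketch (the Spark Connect URI and table name are placeholders):
from langchain.utilities import SparkSQL
spark_sql = SparkSQL.from_uri("sc://localhost:15002")  # placeholder URI
print(spark_sql.get_usable_table_names())
# run_no_throw returns the error message instead of raising on bad SQL
print(spark_sql.run_no_throw("SELECT * FROM my_table LIMIT 5"))  # placeholder table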
pydantic model langchain.utilities.TextRequestsWrapper[source]#
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field headers: Optional[Dict[str, str]] = None#
async adelete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text asynchronously.
async aget(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text asynchronously.
async apatch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text asynchronously.
delete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text.
get(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text.
property requests: langchain.requests.Requests#
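A short usage sketch (the URL is a placeholder); every method returns the response body as text:
from langchain.utilities import TextRequestsWrapper
requests_wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})
body = requests_wrapper.get("https://example.com/api/status")  # placeholder URL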
pydantic model langchain.utilities.TwilioAPIWrapper[source]#
SMS client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
field account_sid: Optional[str] = None#
Twilio account string identifier.
field auth_token: Optional[str] = None#
Twilio auth token.
field from_number: Optional[str] = None#
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty.
run(body: str, to: str) → str[source]#
Run body through Twilio and respond with message sid.
Parameters
body – The text of the message you want to send. Can be up to 1,600
characters in length.
to – The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
pydantic model langchain.utilities.WikipediaAPIWrapper[source]#
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
field doc_content_chars_max: int = 4000#
field lang: str = 'en'#
field load_all_available_meta: bool = False#
field top_k_results: int = 3#
load(query: str) → List[langchain.schema.Document][source]#
Run Wikipedia search and get the article text plus the meta information.
Returns: a list of documents.
run(query: str) → str[source]#
Run Wikipedia search and get page summaries.
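A minimal sketch of run() and load():
from langchain.utilities import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper(top_k_results=2)
summaries = wikipedia.run("Alan Turing")   # concatenated page summaries as one string
docs = wikipedia.load("Alan Turing")       # Document objects with article text and metadata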
pydantic model langchain.utilities.WolframAlphaAPIWrapper[source]#
Wrapper for Wolfram Alpha.
Docs for using:
Go to Wolfram Alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into the WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
field wolfram_alpha_appid: Optional[str] = None#
run(query: str) → str[source]#
Run query through WolframAlpha and parse result.
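A minimal sketch (assuming WOLFRAM_ALPHA_APPID is set in the environment):
from langchain.utilities import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("integrate x^2 from 0 to 1"))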
Vector Stores#
Wrappers on top of vector stores.
class langchain.vectorstores.AnalyticDB(connection_string: str, embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, pre_delete_collection: bool = False, logger: Optional[logging.Logger] = None)[source]#
VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax.
- connection_string is a Postgres connection string.
- embedding_function is any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
- collection_name is the name of the collection to use. (default: langchain)
NOTE: This is not the name of the table, but the name of the collection.
The tables will be created when initializing the store (if not exists),
so make sure the user has the right permissions to create tables.
- pre_delete_collection: if True, will delete the collection if it exists.
(default: False) Useful for testing.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
connect() → sqlalchemy.engine.base.Connection[source]#
classmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) → str[source]#
Return connection string from database parameters.
create_collection() → None[source]#
create_tables_if_not_exists() → None[source]#
delete_collection() → None[source]#
drop_tables() → None[source]#
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
get_collection(session: sqlalchemy.orm.session.Session) → Optional[langchain.vectorstores.analyticdb.CollectionStore][source]#
classmethod get_connection_string(kwargs: Dict[str, Any]) → str[source]#
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AnalyticDB with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) → List[Tuple[langchain.schema.Document, float]][source]#
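A hedged construction sketch (the database parameters are placeholders; the connection string can also come from the PGVECTOR_CONNECTION_STRING environment variable):
from langchain.vectorstores import AnalyticDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
conn_str = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2", host="localhost", port=5432,        # placeholder params
    database="postgres", user="postgres", password="postgres",
)
store = AnalyticDB.from_texts(["hello world"], embeddings, connection_string=conn_str)
docs = store.similarity_search("hello", k=1)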
class langchain.vectorstores.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings – List of tuples of (text, embedding)
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to 'angular'.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Creates an in-memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from raw documents.
Parameters
texts – List of documents to index.
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to 'angular'.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.annoy.Annoy[source]#
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
k – Number of Documents to return. Defaults to 4.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
process_index_results(idxs: List[int], dists: List[float]) → List[Tuple[langchain.schema.Document, float]][source]#
Turns Annoy results into a list of documents and scores.
Parameters
idxs – List of indices of the documents in the index.
dists – List of distances of the documents in the index.
Returns
List of Documents and scores.
save_local(folder_path: str, prefault: bool = False) → None[source]#
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
prefault – Whether to pre-load the index into memory.
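A round-trip sketch of save_local and load_local (the folder path is illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy
embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["hello world"], embeddings)
db.save_local("annoy_index")                        # writes index + docstore to disk
db2 = Annoy.load_local("annoy_index", embeddings)   # restores the same store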
similarity_search(query: str, k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query.
similarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the document at docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your Nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto-created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your Nomic API key.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto-created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AtlasDB.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
get(include: Optional[List[str]] = None) → Dict[str, Any][source]#
Gets the collection.
Parameters
include (Optional[List[str]]) – List of fields to include from db.
Defaults to None.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
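A sketch of explicit persistence (the directory is a placeholder):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
embeddings = OpenAIEmbeddings()
db = Chroma.from_texts(["hello world"], embeddings,
                       persist_directory="./chroma_db")  # placeholder path
db.persist()  # flush the collection to disk explicitly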
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]]
update_document(document_id: str, document: langchain.schema.Document) → None[source]#
Update a document in the collection.
Parameters
document_id (str) – ID of the document to update.
document (Document) – Document to update.
class langchain.vectorstores.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 0, verbose: bool = True, **kwargs: Any)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
We implement naive similarity search and filtering for fast prototyping,
but it can be extended with Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless: doesn't require another service and can be used with major cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store: you can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete(ids: Any[List[str], None] = None, filter: Any[Dict[str, str], None] = None, delete_all: Any[bool, None] = None) → bool[source]#
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) – The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) – Whether to drop the dataset.
Defaults to None.
delete_dataset() → None[source]#
Delete the collection.
classmethod force_delete_by_path(path: str) → None[source]#
Force delete dataset by path.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at ./deeplake
Parameters
path (str, pathlib.Path) –
The full path to the dataset. Can be:
Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use 'activeloop login' from the command line)
AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment
Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required
in either the environment
Local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.
In-memory path of the form mem://path/to/dataset which doesn't save the dataset, but keeps it in memory instead.
Should be used only for testing as it does not persist.
documents (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to embed and run the query on.
k – Number of Documents to return. Defaults to 4.
embedding – Embedding function to use. Defaults to None.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for
L-infinity distance, cos for cosine similarity, 'dot' for dot product.
Defaults to L2.
filter – Attribute filter by metadata, example {'key': 'value'}.
Defaults to None.
maximal_marginal_relevance – Whether to use maximal marginal relevance.
Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Deep Lake with distance returned.
Parameters
query (str) – Query text to search for.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for L-infinity
distance, cos for cosine similarity, 'dot' for dot product.
Defaults to L2.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.DocArrayHnswSearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around HnswLib storage.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install "langchain[docarray]".
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, work_dir: str, n_dim: int, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, index: bool = True, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1, **kwargs: Any) → langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Initialize DocArrayHnswSearch store.
Parameters
embedding (Embeddings) – Embedding function.
work_dir (str) – path to the location where all the data will be stored.
n_dim (int) – dimension of an embedding.
dist_metric (str) – Distance metric for DocArrayHnswSearch can be one of:
'cosine', 'ip', and 'l2'. Defaults to 'cosine'.
max_elements (int) – Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool) – Whether an index should be built for this field.
Defaults to True.
ef_construction (int) – defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int) – parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int) – parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool) – Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int) – Sets the number of CPU threads to use. Defaults to 1.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, work_dir: Optional[str] = None, n_dim: Optional[int] = None, **kwargs: Any) → langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Create a DocArrayHnswSearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None.
work_dir (str) – path to the location where all the data will be stored.
n_dim (int) – dimension of an embedding.
**kwargs – Other keyword arguments to be passed to the __init__ method.
Returns
DocArrayHnswSearch Vector Store
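A hedged sketch using from_texts (work_dir is a placeholder and n_dim must match your embedding dimensionality):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayHnswSearch
embeddings = OpenAIEmbeddings()
db = DocArrayHnswSearch.from_texts(
    ["hello world"], embeddings,
    work_dir="./hnswlib_store",  # placeholder path
    n_dim=1536,                  # OpenAI ada-002 embeddings are 1536-dimensional
)
docs = db.similarity_search("hello", k=1)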
class langchain.vectorstores.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install "langchain[docarray]".
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, metric: Literal['cosine_sim', 'euclidean_dist', 'sqeuclidean_dist'] = 'cosine_sim', **kwargs: Any) → langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) – Embedding function.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: 'cosine_sim', 'euclidean_dist' and 'sqeuclidean_dist'.
Defaults to 'cosine_sim'.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text
if it exists. Defaults to None.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: 'cosine_sim', 'euclidean_dist' and 'sqeuclidean_dist'.
Defaults to 'cosine_sim'.
Returns
DocArrayInMemorySearch Vector Store
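A minimal sketch; the store lives entirely in memory, so no work_dir is needed:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_texts(["hello world"], embeddings)
docs = db.similarity_search("hello", k=1)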
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]#
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) – The URL for the Elasticsearch instance.
index_name (str) – The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) – An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError – If the elasticsearch python package is not installed.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: typing.Callable, index: typing.Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: typing.Dict[int, str], relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score_fn>, normalize_L2: bool = False)[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings β Iterable pairs of string and embedding to
add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings, index_name: str = 'index') β langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path β folder path to load index, docstore,
and index_to_docstore_id from.
embeddings β Embeddings to use when generating queries
index_name β for saving with a specific index file name
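A minimal save/load round-trip sketch, assuming the faiss python package is installed; the folder name faiss_index is illustrative:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(["hello world"], embeddings)
db.save_local("faiss_index")                      # illustrative folder path
restored = FAISS.load_local("faiss_index", embeddings)
```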
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
merge_from(target: langchain.vectorstores.faiss.FAISS) β None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target β FAISS object you wish to merge into the current one
Returns
None.
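A short sketch of merging two stores, reusing the embeddings object from the examples above (texts illustrative):

```python
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now holds the documents and vectors from both stores
```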
save_local(folder_path: str, index_name: str = 'index') β None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path β folder path to save index, docstore,
and index_to_docstore_id to.
index_name – for saving with a specific index file name
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
embedding β Embedding vector to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.LanceDB(connection: Any, embedding: langchain.embeddings.base.Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]#
Wrapper around LanceDB vector database.
To use, you should have lancedb python package installed.
Example
import lancedb
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Turn texts into embeddings and add them to the database.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
Returns
List of ids of the added texts.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) β langchain.vectorstores.lancedb.LanceDB[source]#
Return VectorStore initialized from texts and embeddings.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return documents most similar to the query
Parameters
query β String to query the vectorstore with.
k β Number of documents to return.
Returns
List of documents most similar to the query.
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) β List[str][source]#
Insert text data into Milvus.
Inserting data when the collection has not yet been made will result
in creating a new collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) β The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) β Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) β Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) β Batch size to use for insertion.
Defaults to 1000.
Raises
MilvusException β Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) β langchain.vectorstores.milvus.Milvus[source]#
Create a Milvus collection, indexes it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Milvus Vector Store
Return type
Milvus
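For example, a minimal sketch assuming a Milvus server reachable at localhost:19530; texts is a list of strings as elsewhere in these examples:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

embeddings = OpenAIEmbeddings()
vector_store = Milvus.from_texts(
    texts,                                                   # texts: List[str]
    embeddings,
    connection_args={"host": "localhost", "port": "19530"},  # assumed local server
    collection_name="LangChainCollection",
)
docs = vector_store.similarity_search("query text", k=4)
```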
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) β The text being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) β The text to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
embedding (List[float]) β The embedding vector to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) β The text being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.MyScale(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, **kwargs: Any)[source]#
Wrapper around MyScale vector database
You need a clickhouse-connect python package, and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes;
it also supports complex queries with multiple conditions,
constraints, and even sub-queries.
For more information, please visit the [MyScale official site](https://docs.myscale.com/en/overview/).
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
ids β Optional list of ids to associate with the texts.
batch_size β Batch size of insertion
metadata β Optional column data to be inserted
Returns
List of ids from adding the texts into the vectorstore.
drop() β None[source]#
Helper function: Drop data
escape_str(value: str) β str[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) β langchain.vectorstores.myscale.MyScale[source]#
Create a MyScale wrapper from existing texts.
Parameters
embedding (Embeddings) – Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (MyScaleSettings, Optional) β Myscale configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batchsize when transmitting data to MyScale.
Defaults to 32.
metadata (List[dict], optional) β metadata to texts. Defaults to None.
into (Other keyword arguments will pass) β [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns
MyScale Index
property metadata_column: str#
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Please do not let end-users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with MyScale by vectors
Parameters
embedding (List[float]) – embedding vector of the query
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Please do not let end-users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Please do not let end-users fill this, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of documents
Return type
List[Document]
pydantic settings langchain.vectorstores.MyScaleSettings[source]#
MyScale Client Configuration
Attribute:
myscale_host (str) : A URL to connect to the MyScale backend. Defaults to 'localhost'.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : Index type string. Defaults to 'IVFFLAT'.
index_param (dict) : Index build parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'langchain'.
metric (str) : Metric to compute distance, supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.
column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector; must be the same size as the number of columns. For example:
.. code-block:: python

    {
        'id': 'text_id',
        'vector': 'text_embedding',
        'text': 'text_plain',
        'metadata': 'metadata_dictionary_in_json',
    }

Defaults to identity map.
Show JSON schema
{
"title": "MyScaleSettings",
"description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Usernamed to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": { | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
5af2e21b3d66-42 | "type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'myscale_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8443,
"env_names": "{'myscale_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'myscale_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'myscale_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "IVFFLAT",
"env_names": "{'myscale_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"env_names": "{'myscale_index_param'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata"
},
"env_names": "{'myscale_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": { | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
5af2e21b3d66-43 | "type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'myscale_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'myscale_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "cosine",
"env_names": "{'myscale_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = myscale_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Dict[str, str]])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
field column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}#
field database: str = 'default'#
field host: str = 'localhost'#
field index_param: Optional[Dict[str, str]] = None#
field index_type: str = 'IVFFLAT'#
field metric: str = 'cosine'#
field password: Optional[str] = None#
field port: int = 8443#
field table: str = 'langchain'#
field username: Optional[str] = None#
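As an illustration, the settings can be constructed explicitly and passed to the MyScale wrapper; the host and credentials below are placeholders, and per the Config above each field may also come from a MYSCALE_*-prefixed environment variable or a .env file:

```python
from langchain.vectorstores import MyScale, MyScaleSettings

config = MyScaleSettings(
    host="msc-example.aws.myscale.com",  # placeholder host
    port=8443,
    username="user",                     # placeholder credentials
    password="passwd",
)
vector_store = MyScale(embedding, config)
```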
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings, **kwargs: Any)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
bulk_size β Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) β langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
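A sketch of passing the approximate-search tuning parameters above at index-creation time, under the assumption of a local OpenSearch instance; texts is a list of strings as in the other examples:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch.from_texts(
    texts,                                    # texts: List[str]
    embeddings,
    opensearch_url="http://localhost:9200",   # assumed local instance
    engine="faiss",
    space_type="cosinesimil",
    ef_construction=256,
    m=48,
)
```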
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
metadata_field: Document field that metadata is stored in. Defaults to
"metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and their scores most similar to the query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents along with its scores most similar to the query.
Optional Args: same as similarity_search
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
namespace β Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) β langchain.vectorstores.pinecone.Pinecone[source]#
Load pinecone vectorstore from index name.
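For example, assuming pinecone.init has already been called as shown above and an index named "langchain-demo" exists:

```python
vectorstore = Pinecone.from_existing_index("langchain-demo", embeddings)
docs = vectorstore.similarity_search("query", k=4)
```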
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) β langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return pinecone documents most similar to query.
Parameters
query β Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter β Dictionary of argument(s) to filter on metadata
namespace β Namespace to search in. Default will search in ββ namespace.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Dictionary of argument(s) to filter on metadata
namespace β Namespace to search in. Default will search in ββ namespace.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embeddings: Optional[langchain.embeddings.base.Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', embedding_function: Optional[Callable] = None)[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.qdrant.Qdrant[source]#
Construct Qdrant wrapper from a list of texts.
Parameters
texts β A list of texts to be indexed in Qdrant.
embedding β A subclass of Embeddings, responsible for text vectorization.
metadatas β An optional list of metadata. If provided it has to be of the same
length as a list of texts.
location β If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url β either host or str of βOptional[scheme], host, Optional[port],
Optional[prefix]β. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port β Port of the gRPC interface. Default: 6334
prefer_grpc – If true, use the gRPC interface whenever possible in custom methods.
Default: False
https β If true - use HTTPS(SSL) protocol. Default: None
api_key β API key for authentication in Qdrant Cloud. Default: None
prefix β If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout β Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host β Host name of Qdrant service. If url and host are None, set to
βlocalhostβ. Default: None
path β Path in which the vectors will be stored while using local mode.
Default: None
collection_name β Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func β Distance function. One of: βCosineβ / βEuclidβ / βDotβ.
Default: βCosineβ
content_payload_key β A payload key used to store the content of the document.
Default: βpage_contentβ
metadata_payload_key β A payload key used to store the metadata of the document.
Default: βmetadataβ
**kwargs β Additional arguments passed directly into REST client initialization
This is a user friendly interface that:
Creates embeddings, one for each text
Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, host="localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool, dict, list]]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool, dict, list]]] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each.
class langchain.vectorstores.Redis(redis_url: str, index_name: str, embedding_function: typing.Callable, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score>, **kwargs: typing.Any)[source]#
Wrapper around Redis vector database.
To use, you should have the redis python package installed.
Example
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379",
index_name="my-index",
embedding_function=embeddings.embed_query,
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, keys: Optional[List[str]] = None, batch_size: int = 1000, **kwargs: Any) β List[str][source]#
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
keys (Optional[List[str]], optional) β Optional key values to use as ids.
Defaults to None.
batch_size (int, optional) β Batch size to use for writes. Defaults to 1000.
Returns
List of ids added to the vectorstore
Return type
List[str]
as_retriever(**kwargs: Any) β langchain.vectorstores.redis.RedisVectorStoreRetriever[source]#
static drop_index(index_name: str, delete_documents: bool, **kwargs: Any) β bool[source]#
Drop a Redis search index.
Parameters
index_name (str) β Name of the index to drop.
delete_documents (bool) β Whether to drop the associated documents.
Returns
Whether or not the drop was successful.
Return type
bool
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) β langchain.vectorstores.redis.Redis[source]#
Connect to an existing Redis index.
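A sketch of reconnecting to a store and later dropping it, assuming a local Redis with the RediSearch module and an existing index named "my-index":

```python
rds = Redis.from_existing_index(
    embeddings,
    index_name="my-index",               # assumed existing index
    redis_url="redis://localhost:6379",
)
# later: drop the index but keep the underlying documents
Redis.drop_index("my-index", delete_documents=False, redis_url="redis://localhost:6379")
```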
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) β langchain.vectorstores.redis.Redis[source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
.. rubric:: Example
classmethod from_texts_return_keys(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['COSINE', 'IP', 'L2'] = 'COSINE', **kwargs: Any) β Tuple[langchain.vectorstores.redis.Redis, List[str]][source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
.. rubric:: Example
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_limit_score(query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text within the
score_threshold range.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
score_threshold (float) – The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the similarity calculation algorithm is based on cosine
similarity, the smaller the angle, the higher the similarity.
Returns
A list of documents that are most similar to the query text,
including the match score for each document.
Return type
List[Document]
Note
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
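For example, assuming an rds instance created as in the sketch above:

```python
docs = rds.similarity_search_limit_score("my query", k=4, score_threshold=0.2)
```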
similarity_search_with_score(query: str, k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.SKLearnVectorStore(embedding: langchain.embeddings.base.Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]#
A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) β langchain.vectorstores.sklearn.SKLearnVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
persist() β None[source]#
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
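A minimal persistence sketch; the path is illustrative, and the serializer keyword is assumed to be forwarded to the constructor via **kwargs:

```python
from langchain.vectorstores import SKLearnVectorStore

store = SKLearnVectorStore.from_texts(
    texts,                                # texts: List[str]
    embeddings,
    persist_path="./sklearn_store.json",  # illustrative path
    serializer="json",                    # assumed forwarded kwarg
)
store.persist()  # writes the vectors and metadata to persist_path
```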
class langchain.vectorstores.SupabaseVectorStore(client: supabase.client.Client, embedding: Embeddings, table_name: str, query_name: Union[str, None] = None)[source]#
VectorStore for a Supabase postgres database. Assumes you have the pgvector
extension installed and a match_documents (or similar) function. For more details:
https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own match_documents function in order to limit the search
space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If you'd like to use max_marginal_relevance_search, please review the instructions
below on modifying the match_documents function to return matched embeddings.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict[Any, Any]]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
add_vectors(vectors: List[List[float]], documents: List[langchain.schema.Document]) β List[str][source]#
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[supabase.client.Client] = None, table_name: Optional[str] = 'documents', query_name: Union[str, None] = 'match_documents', **kwargs: Any) β SupabaseVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
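For example, a sketch assuming the Supabase URL and service key are available as environment variables (the variable names are illustrative):

```python
import os

import supabase
from langchain.vectorstores import SupabaseVectorStore

supabase_client = supabase.create_client(
    os.environ["SUPABASE_URL"],          # assumed environment variables
    os.environ["SUPABASE_SERVICE_KEY"],
)
vector_store = SupabaseVectorStore.from_texts(
    texts,                               # texts: List[str]
    embeddings,
    client=supabase_client,
    table_name="documents",
    query_name="match_documents",
)
```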
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search requires that query_name returns matched
embeddings alongside the matched documents. The following function
demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(id bigint,
              content text,
              metadata jsonb,
              embedding vector(1536),
              similarity float)
LANGUAGE plpgsql
AS $$
#variable_conflict use_column
BEGIN
    RETURN QUERY
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM docstore
    ORDER BY docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
query_name: str#
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector_returning_embeddings(query: List[float], k: int) β List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]][source]#
similarity_search_by_vector_with_relevance_scores(query: List[float], k: int) β List[Tuple[langchain.schema.Document, float]][source]#
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query β input text
k β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
table_name: str#
class langchain.vectorstores.Tair(embedding_function: langchain.embeddings.base.Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Add texts data to an existing index.
create_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) β bool[source]#
static drop_index(index_name: str = 'langchain', **kwargs: Any) β bool[source]#
Drop an existing index.
Parameters
index_name (str) β Name of the index to drop.
Returns
True if the index is dropped successfully.
Return type
bool
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from documents and embeddings.
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Connect to an existing Tair index. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from texts and embeddings.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
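A short getting-started sketch; the tair_url keyword and connection string are assumptions, so adjust them for your deployment:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Tair

# Assumed URL format: redis://[user:password@]host:port
vector_store = Tair.from_texts(
    ["Tair is a cloud-native in-memory database service"],
    OpenAIEmbeddings(),
    tair_url="redis://localhost:6379",
)
docs = vector_store.similarity_search("What is Tair?", k=1)
```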
class langchain.vectorstores.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]#
Wrapper around Typesense vector search.
To use, you should have the typesense python package installed.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
) | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
    typesense_client,
    embedding,
    typesense_collection_name=typesense_collection_name,
    text_key="text",
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embedding and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_client_params(embedding: langchain.embeddings.base.Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) β langchain.vectorstores.typesense.Typesense[source]#
Initialize Typesense directly from client parameters.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
    OpenAIEmbeddings(),
    host="localhost",
    port="8108",
    protocol="http",
    typesense_collection_name="langchain-memory",
)
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, typesense_client: Optional[Client] = None, typesense_client_params: Optional[dict] = None, typesense_collection_name: Optional[str] = None, text_key: str = 'text', **kwargs: Any) β Typesense[source]#
Construct Typesense wrapper from raw text.
similarity_search(query: str, k: int = 4, filter: Optional[str] = '', **kwargs: Any) β List[langchain.schema.Document][source]#
Return typesense documents most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 4, filter: Optional[str] = '') β List[Tuple[langchain.schema.Document, float]][source]#
Return typesense documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
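A brief sketch of filtered search, reusing the vectorstore built in the constructor example above; the num_lines field is a hypothetical attribute of the collection schema:

```python
# Plain similarity search, then a scored search restricted by a Typesense
# filter_by expression over a hypothetical numeric field.
docs = vectorstore.similarity_search("vector search basics", k=4)
scored = vectorstore.similarity_search_with_score(
    "vector search basics", k=4, filter="num_lines:>5"
)
for doc, score in scored:
    print(score, doc.page_content[:80])
```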
class langchain.vectorstores.Vectara(vectara_customer_id: Optional[str] = None, vectara_corpus_id: Optional[str] = None, vectara_api_key: Optional[str] = None)[source]#
Implementation of Vector Store using Vectara (https://vectara.com).
.. rubric:: Example | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
as_retriever(**kwargs: Any) β langchain.vectorstores.vectara.VectaraRetriever[source]#
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.vectara.Vectara[source]#
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
.. rubric:: Example
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
similarity_search(query: str, k: int = 5, alpha: float = 0.025, filter: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return Vectara documents most similar to query.
Parameters | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 5.
filter β Metadata filter expression (a string), e.g.
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
Returns
List of Documents most similar to the query
similarity_search_with_score(query: str, k: int = 5, alpha: float = 0.025, filter: Optional[str] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return Vectara documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 5.
alpha β parameter for hybrid search (called "lambda" in the Vectara
documentation).
filter β Metadata filter expression (a string), e.g.
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
Returns
List of Documents most similar to the query and score for each.
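A brief sketch, reusing the vectorstore from the Vectara example above; the query text and filter values are illustrative:

```python
# alpha blends exact-term and neural matching (Vectara's "lambda").
results = vectorstore.similarity_search_with_score(
    "What is our refund policy?",
    k=5,
    alpha=0.025,
    filter="doc.rating > 3.0 and part.lang = 'deu'",
)
for doc, score in results:
    print(f"{score:.3f}", doc.page_content[:80])
```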
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) β Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str] | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) β Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) β langchain.vectorstores.base.VectorStoreRetriever[source]#
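as_retriever has no docstring here; a minimal sketch of common usage, assuming any concrete vectorstore instance (search_type and search_kwargs are forwarded to the underlying search methods):

```python
# "similarity" is the default; "mmr" routes to max_marginal_relevance_search.
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4, "fetch_k": 20, "lambda_mult": 0.5},
)
docs = retriever.get_relevant_documents("how do I reset my password?")
```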
async asearch(query: str, search_type: str, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
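For reference, the standard MMR selection criterion (a textbook formulation, not quoted from the LangChain source): given query q, fetched candidate set C, and already-selected set S, the next document is

```latex
\mathrm{MMR}(q, C, S) = \arg\max_{d \in C \setminus S}
\Bigl[\, \lambda \,\mathrm{sim}(d, q)
\;-\; (1 - \lambda) \max_{d_j \in S} \mathrm{sim}(d, d_j) \,\Bigr]
```

where λ plays the role of lambda_mult: λ = 1 ranks purely by query similarity (minimum diversity), while λ = 0 ranks purely by dissimilarity to already-selected documents (maximum diversity).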
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query β input text
k β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score) | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
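A short sketch, assuming any concrete VectorStore implementation named vectorstore:

```python
# Only results with normalized relevance score >= 0.8 are returned.
results = vectorstore.similarity_search_with_relevance_scores(
    "shipping times to Europe", k=4, score_threshold=0.8
)
for doc, score in results:
    assert 0.0 <= score <= 1.0  # scores are normalized to [0, 1]
    print(score, doc.metadata)
```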
class langchain.vectorstores.Weaviate(client: typing.Any, index_name: str, text_key: str, embedding: typing.Optional[langchain.embeddings.base.Embeddings] = None, attributes: typing.Optional[typing.List[str]] = None, relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_score_normalizer>, by_text: bool = True)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.weaviate.Weaviate[source]#
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings, | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
weaviate_url="http://localhost:8080"
)
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_text(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Look up similar documents by embedding vector in Weaviate.
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
class langchain.vectorstores.Zilliz(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]# | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) β langchain.vectorstores.zilliz.Zilliz[source]#
Create a Zilliz collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
'LangChainCollection'.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to 'Session'.
index_params (Optional[dict], optional) β Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Zilliz Vector Store
Return type
Zilliz
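A hedged sketch; the connection_args keys shown (uri, token) are assumptions based on a typical Zilliz Cloud setup, and the endpoint and key are placeholders:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Zilliz

vector_store = Zilliz.from_texts(
    ["Zilliz is a managed Milvus service"],
    OpenAIEmbeddings(),
    collection_name="LangChainCollection",
    connection_args={
        "uri": "https://<cluster-id>.<region>.zillizcloud.com",  # placeholder
        "token": "<api-key>",                                    # placeholder
    },
)
docs = vector_store.similarity_search("What is Zilliz?", k=1)
```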
Document Compressors#
pydantic model langchain.retrievers.document_compressors.CohereRerank[source]#
field client: Client [Required]#
field model: str = 'rerank-english-v2.0'#
field top_n: int = 3#
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
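A small sketch; since the client field is required, a Cohere client is constructed explicitly (the API key is a placeholder):

```python
import cohere
from langchain.retrievers.document_compressors import CohereRerank
from langchain.schema import Document

reranker = CohereRerank(client=cohere.Client("<cohere-api-key>"), top_n=2)
docs = [
    Document(page_content="Apples are red or green."),
    Document(page_content="To bake an apple pie, start with the crust."),
    Document(page_content="Bananas are yellow."),
]
# Returns the top_n documents reordered by the rerank model's relevance.
best = reranker.compress_documents(docs, query="how do I bake an apple pie?")
```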
pydantic model langchain.retrievers.document_compressors.DocumentCompressorPipeline[source]#
Document compressor that uses a pipeline of transformers.
field transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]#
List of document filters that are chained together and run in sequence.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Transform a list of documents.
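A sketch of a two-stage pipeline, assuming OpenAI credentials are configured: first split documents into smaller chunks (text splitters implement BaseDocumentTransformer), then drop chunks dissimilar to the query:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    EmbeddingsFilter,
)
from langchain.text_splitter import CharacterTextSplitter

pipeline_compressor = DocumentCompressorPipeline(
    transformers=[
        # Stage 1: re-chunk retrieved documents into smaller pieces.
        CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". "),
        # Stage 2: keep only chunks similar enough to the query embedding.
        EmbeddingsFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.76),
    ]
)
```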
pydantic model langchain.retrievers.document_compressors.EmbeddingsFilter[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embedding document contents and queries.
field k: Optional[int] = 20#
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20. | https://python.langchain.com/en/latest/reference/modules/document_compressors.html |
4266ae1c78ed-1 | similarity_threshold must be specified. Defaults to 20.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field similarity_threshold: Optional[float] = None#
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if k is set
to None.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Filter documents based on similarity of their embeddings to the query.
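A minimal sketch (the 0.76 threshold is an illustrative value, not a recommendation):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain.schema import Document

embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(), similarity_threshold=0.76
)
docs = [
    Document(page_content="The meeting was moved to 3pm on Thursday."),
    Document(page_content="Bananas are a good source of potassium."),
]
# Keeps only documents whose embedding is similar enough to the query's.
relevant = embeddings_filter.compress_documents(docs, query="when is the meeting?")
```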
pydantic model langchain.retrievers.document_compressors.LLMChainExtractor[source]#
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for compressing documents.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Compress page content of raw documents asynchronously.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Compress page content of raw documents. | https://python.langchain.com/en/latest/reference/modules/document_compressors.html |
4266ae1c78ed-2 | Compress page content of raw documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) β langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#
Initialize from LLM.
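A one-line construction sketch, assuming OpenAI credentials are configured:

```python
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor

# Builds an LLMChain that extracts only query-relevant spans from each document.
compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
```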
pydantic model langchain.retrievers.document_compressors.LLMChainFilter[source]#
Filter that drops documents that aren't relevant to the query.
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) β Sequence[langchain.schema.Document][source]#
Filter down documents based on their relevance to the query.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) β langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#
Retrievers#
pydantic model langchain.retrievers.ArxivRetriever[source]#
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
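A minimal sketch; load_max_docs is an ArxivAPIWrapper argument (passed through unchanged, per the note above):

```python
from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("quantum error correction")
```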
pydantic model langchain.retrievers.AzureCognitiveSearchRetriever[source]#
Wrapper around Azure Cognitive Search.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
ClientSession, in case we want to reuse connection for better performance.
field api_key: str = ''#
API Key. Both Admin and Query keys work, but for reading data it's
recommended to use a Query key.
field api_version: str = '2020-06-30'#
API version
field content_key: str = 'content'#
Key in a retrieved result to set as the Document page_content.
field index_name: str = ''#
Name of Index inside Azure Cognitive Search service
field service_name: str = ''#
Name of Azure Cognitive Search service
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
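A construction sketch using the fields documented above; the service, index, and key values are placeholders:

```python
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="<search-service>",   # placeholder
    index_name="<index>",              # placeholder
    api_key="<query-key>",             # placeholder; a Query key suffices
    content_key="content",
)
docs = retriever.get_relevant_documents("what is langchain?")
```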
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field bearer_token: str [Required]#
field filter: Optional[dict] = None#
field top_k: int = 3#
field url: str [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ContextualCompressionRetriever[source]#
Retriever that wraps a base retriever and compresses the results.
field base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#
Compressor for compressing retrieved documents.
field base_retriever: langchain.schema.BaseRetriever [Required]#
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
Returns
Sequence of relevant documents
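A composition sketch, assuming a compressor such as the LLMChainExtractor from the Document Compressors page and any vectorstore instance:

```python
from langchain.retrievers import ContextualCompressionRetriever

compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,                 # e.g. an LLMChainExtractor
    base_retriever=vectorstore.as_retriever(),  # any vector store retriever
)
docs = compression_retriever.get_relevant_documents(
    "What did the president say about the economy?"
)
```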
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
api_key: Optional[str]#
datastore_url: str#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
top_k: Optional[int]#
class langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
add_texts(texts: Iterable[str], refresh_indices: bool = True) β List[str][source]#
Run more texts through the embeddings and add to the retriever.
Parameters
texts β Iterable of strings to add to the retriever.
refresh_indices β bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) β langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
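A short end-to-end sketch against a hypothetical local Elasticsearch instance:

```python
from langchain.retrievers import ElasticSearchBM25Retriever

# `create` builds a BM25-configured index on the given cluster.
retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200", index_name="langchain-index"
)
retriever.add_texts(["foo", "foo bar", "hello world"])
docs = retriever.get_relevant_documents("foo")
```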
pydantic model langchain.retrievers.KNNRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.retrievers.knn.KNNRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
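A minimal sketch, assuming OpenAI credentials are configured:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever

retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
docs = retriever.get_relevant_documents("foo")
```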
class langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#
field alpha: float = 0.5#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field sparse_encoder: Any = None#
field top_k: int = 4#
add_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) β None[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
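A hedged sketch; it assumes the pinecone-text package's BM25Encoder as the sparse encoder and a pre-existing Pinecone index created with the dotproduct metric (needed for hybrid scoring). Keys, environment, and index name are placeholders:

```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

pinecone.init(api_key="<pinecone-api-key>", environment="<environment>")
retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=BM25Encoder().default(),  # pre-fitted default encoder
    index=pinecone.Index("hybrid-index"),    # hypothetical index name
    alpha=0.5,                               # dense/sparse weighting
)
docs = retriever.get_relevant_documents("foo")
```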
pydantic model langchain.retrievers.RemoteLangChainRetriever[source]#
field headers: Optional[dict] = None#
field input_key: str = 'message'#
field metadata_key: str = 'metadata'#
field page_content_key: str = 'page_content'#
field response_key: str = 'response'#
field url: str [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SVMRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.retrievers.svm.SVMRetriever[source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SelfQueryRetriever[source]#
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
The LLMChain for generating the vector store queries.
field search_kwargs: dict [Optional]#
Keyword arguments to pass in to the vector store search.
field search_type: str = 'similarity'#
The search type to perform on the vector store.
field structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]#
Translator for turning internal query language into vectorstore search params.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The underlying vector store from which documents will be retrieved.
field verbose: bool = False#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, **kwargs: Any) β langchain.retrievers.self_query.base.SelfQueryRetriever[source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
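A sketch over a hypothetical movie vectorstore with structured metadata; the field names and descriptions are illustrative:

```python
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers import SelfQueryRetriever

metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The release year", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,  # any vector store whose docs carry these fields
    document_contents="Brief summary of a movie",
    metadata_field_info=metadata_field_info,
)
# The LLM turns this into a semantic query plus a structured year filter.
docs = retriever.get_relevant_documents("sci-fi movies released after 2010")
```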
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]#
field k: int = 4#
field tfidf_array: Any = None#
field vectorizer: Any = None#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_documents(documents: Iterable[langchain.schema.Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) β langchain.retrievers.tfidf.TFIDFRetriever[source]#
classmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) β langchain.retrievers.tfidf.TFIDFRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#
Retriever combining embedding similarity with recency.
field decay_rate: float = 0.01#
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
field default_salience: Optional[float] = None# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document] [Optional]#
The memory_stream of documents to search through.
field other_score_keys: List[str] = []#
Other keys in the metadata to factor into the score, e.g. "importance".
field search_kwargs: dict [Optional]#
Keyword arguments to pass to the vectorstore similarity search.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The vectorstore to store documents and determine salience.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Add documents to vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Add documents to vectorstore.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_salient_docs(query: str) β Dict[int, Tuple[langchain.schema.Document, float]][source]#
Return documents that are salient to the query.
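A sketch with a FAISS backend; per the decay_rate field above, recency contributes (1.0 - decay_rate)**hrs_passed to the combined score. The 1536 dimension assumes OpenAI embeddings:

```python
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
# Empty FAISS store: embedding fn, flat L2 index, docstore, id mapping.
vectorstore = FAISS(
    embeddings.embed_query, faiss.IndexFlatL2(1536), InMemoryDocstore({}), {}
)
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, decay_rate=0.01, k=4
)
retriever.add_documents([Document(page_content="hello world")])
docs = retriever.get_relevant_documents("hello")
```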
class langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) β langchain.retrievers.vespa_retriever.VespaRetriever[source]#
Instantiate retriever from params.
Parameters
url (str) β Vespa app URL.
content_field (str) β Field in results to return as Document page_content.
k (Optional[int]) β Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") β Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) β Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) β Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) β Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) β Keyword arguments added to query body.
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) β List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extra = 'forbid'#
add_documents(docs: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Upload documents to Weaviate.
async aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) β List[langchain.schema.Document][source]#
Look up similar documents in Weaviate.
pydantic model langchain.retrievers.WikipediaRetriever[source]#
It is effectively a wrapper for WikipediaAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all WikipediaAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
class langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#
A Retriever implementation for the Zep long-term memory store. Search your
user's long-term chat history with Zep.
Note: You will need to provide the user's session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in-memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) β None[source]#
Add texts to in memory dictionary.
search(search: str) β Union[str, langchain.schema.Document][source]#
Search via direct lookup.
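A quick sketch:

```python
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document

store = InMemoryDocstore({"1": Document(page_content="hello")})
store.add({"2": Document(page_content="world")})
result = store.search("2")  # the Document, or an error string if the ID is absent
```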
class langchain.docstore.Wikipedia[source]#
Wrapper around the Wikipedia API.
search(search: str) β Union[str, langchain.schema.Document][source]#
Try to search for wiki page.
If the page exists, return the page summary and a PageWithLookups object.
If the page does not exist, return similar entries.