SVM#
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
This notebook goes over how to use a retriever that, under the hood, uses an SVM from the scikit-learn package.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
#!pip install scikit-learn
#!pip install lark
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
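Under the hood, the trick from the notebook linked above is to train a linear SVM per query: the query embedding is the single positive example, every document embedding is a negative, and documents are ranked by their distance to the decision boundary. A minimal sketch of that idea (illustrative only; SVMRetriever’s internals may differ):
import numpy as np
from sklearn import svm

def svm_rank(query_vec, doc_vecs):
    # Row 0 is the query; all document embeddings follow.
    x = np.concatenate([query_vec[None, :], doc_vecs])
    y = np.zeros(len(x))
    y[0] = 1  # the query is the only positive example
    clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000, tol=1e-6)
    clf.fit(x, y)
    similarities = clf.decision_function(x)[1:]  # score for each document
    return np.argsort(-similarities)  # document indices, best match first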
Metal#
Metal is a managed service for ML Embeddings.
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here.
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)
Ingest Documents#
You only need to do this if you haven’t already set up an index.
metal.index({"text": "foo1"})
metal.index({"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
Azure Cognitive Search#
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you’ll work with the following capabilities:
A search engine for full text search over a search index containing user-owned content
Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
Programmability through REST APIs and client libraries in Azure SDKs
Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.
Set up Azure Cognitive Search#
To set up ACS, please follow the instructions here.
Please note
the name of your ACS service,
the name of your ACS index,
your API key.
Your API key can be either Admin or Query key, but as we only read data it is recommended to use a Query key.
Using the Azure Cognitive Search Retriever#
import os
from langchain.retrievers import AzureCognitiveSearchRetriever
Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever).
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] ="<YOUR_ACS_INDEX_NAME>" | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/azure_cognitive_search.html |
ddc054d924da-1 | os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"
Create the Retriever
retriever = AzureCognitiveSearchRetriever(content_key="content")
Now you can use it to retrieve documents from Azure Cognitive Search
retriever.get_relevant_documents("what is langchain")
Self-querying with Qdrant#
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.
In the notebook we’ll demo the SelfQueryRetriever wrapped around a Qdrant vector store.
Creating a Qdrant vectorstore#
First we’ll want to create a Qdrant VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.
#!pip install lark qdrant-client
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
# import os
# import getpass
# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant
embeddings = OpenAIEmbeddings()
docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
    Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
    Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
    Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
    Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"}),
]
vectorstore = Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float",
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/qdrant_self_query.html |
c4d79f4fb09c-3 | query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?")
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
LOTR (Merger Retriever)#
Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.
The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.
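Conceptually, the merge is a round-robin interleaving of each retriever’s result list; a rough sketch of the idea (illustrative only, the actual class may differ in details):
from itertools import chain, zip_longest
from typing import List
from langchain.schema import BaseRetriever, Document

def merge_results(retrievers: List[BaseRetriever], query: str) -> List[Document]:
    # Fetch from every retriever, then interleave the lists round-robin.
    all_results = [r.get_relevant_documents(query) for r in retrievers]
    interleaved = chain.from_iterable(zip_longest(*all_results))
    return [doc for doc in interleaved if doc is not None]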
import os
import chromadb
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers import ContextualCompressionRetriever
# Get three different embeddings.
all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")
filter_embeddings = OpenAIEmbeddings()
ABS_PATH = os.path.dirname(os.path.abspath(__file__))
DB_DIR = os.path.join(ABS_PATH, "db")
# Instantiate two different Chroma indexes, each with a different embedding.
client_settings = chromadb.config.Settings(
    chroma_db_impl="duckdb+parquet",
    persist_directory=DB_DIR,
    anonymized_telemetry=False,
)
db_all = Chroma(
    collection_name="project_store_all",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=all_mini,
)
db_multi_qa = Chroma(
    collection_name="project_store_multi",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=multi_qa_mini,
)
# Define two retrievers with different embeddings and different search types.
retriever_all = db_all.as_retriever(
    search_type="similarity", search_kwargs={"k": 5, "include_metadata": True}
)
retriever_multi_qa = db_multi_qa.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "include_metadata": True}
)
# The Lord of the Retrievers will hold the output of both retrievers and can be used
# as any other retriever on different types of chains.
lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])
Remove redundant results from the merged retrievers.#
# We can remove redundant results from both retrievers using yet another embedding.
# Using multiple embeddings in different steps can help reduce bias.
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
pipeline = DocumentCompressorPipeline(transformers=[filter])
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
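The merged-and-filtered retriever can then be queried like any other; a minimal usage sketch (the query string here is hypothetical):
docs = compression_retriever.get_relevant_documents("What is the project about?")
for doc in docs:
    print(doc.page_content[:100])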
ChatGPT Plugin#
OpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT’s capabilities and allowing it to perform a wide range of actions.
Plugins can allow ChatGPT to do things like:
Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()
# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
from langchain.docstore.document import Document
import json
def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)
write_json("foo.json", data)
# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
Using the ChatGPT Retriever Plugin#
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it? The code below walks through how to do that.
We want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
Vespa#
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
This notebook shows how to use Vespa.ai as a LangChain retriever.
To create a retriever, we use pyvespa to create a connection to a Vespa service.
#!pip install pyvespa
from vespa.application import Vespa
vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
This creates a connection to a Vespa service, here the Vespa documentation search service.
Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance.
After connecting to the service, you can set up the retriever:
from langchain.retrievers.vespa_retriever import VespaRetriever
vespa_query_body = {
"yql": "select content from paragraph where userQuery()",
"hits": 5,
"ranking": "documentation",
"locale": "en-us"
}
vespa_content_field = "content"
retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field)
This sets up a LangChain retriever that fetches documents from the Vespa application.
Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain.
Please refer to the pyvespa documentation for more information.
Now you can return the results and continue using the results in LangChain.
retriever.get_relevant_documents("what is vespa?") | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vespa.html |
701cc57412b8-1 | retriever.get_relevant_documents("what is vespa?")
Self-querying with Weaviate#
Creating a Weaviate vectorstore#
First we’ll want to create a Weaviate VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.
#!pip install lark weaviate-client
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import os
embeddings = OpenAIEmbeddings()
docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
    Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
    Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
    Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
    Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"}),
]
vectorstore = Weaviate.from_documents(
    docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float",
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
Filter k#
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
Weaviate Hybrid Search#
Weaviate is an open source vector database.
Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search with vector search techniques.
The Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.
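Conceptually, a hybrid ranker blends a normalized keyword (e.g. BM25) score with a normalized vector-similarity score; a minimal sketch of such a fusion (illustrative only, not Weaviate’s exact formula):
def hybrid_score(bm25_score, vector_score, alpha=0.5):
    # alpha = 1.0 is pure vector search, alpha = 0.0 is pure keyword search;
    # both inputs are assumed to already be normalized to [0, 1].
    return alpha * vector_score + (1 - alpha) * bm25_score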
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
Set up the retriever:
#!pip install weaviate-client
import weaviate
import os
WEAVIATE_URL = os.getenv("WEAVIATE_URL")
client = weaviate.Client(
    url=WEAVIATE_URL,
    auth_client_secret=weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY")),
    additional_headers={
        "X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
    },
)
# client.schema.delete_all()
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
retriever = WeaviateHybridSearchRetriever(
    client, index_name="LangChain", text_key="text"
)
Add some data:
docs = [
    Document(
        metadata={
            "title": "Embracing The Future: AI Unveiled",
            "author": "Dr. Rebecca Simmons",
        },
        page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
    ),
    Document(
        metadata={
            "title": "Symbiosis: Harmonizing Humans and AI",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
    ),
    Document(
        metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
        page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
    ),
    Document(
        metadata={
            "title": "Conscious Constructs: The Search for AI Sentience",
            "author": "Dr. Samuel Cortez",
        },
        page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
    ),
    Document(
        metadata={
            "title": "Invisible Routines: Hidden AI in Everyday Life",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
    ),
]
retriever.add_documents(docs)
['eda16d7d-437d-4613-84ae-c2e38705ec7a',
'04b501bf-192b-4e72-be77-2fbbe7e67ebf',
'18a1acdb-23b7-4482-ab04-a6c2ed51de77',
'88e82cc3-c020-4b5a-b3c6-ca7cf3fc6a04',
'f6abd9d5-32ed-46c4-bd08-f8d0f7c9fc95']
Do a hybrid search:
retriever.get_relevant_documents("the ethical implications of AI")
[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html |
a834fcc40dd5-3 | Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]
Do a hybrid search with a where filter:
retriever.get_relevant_documents(
    "AI integration in society",
    where_filter={
        "path": ["author"],
        "operator": "Equal",
        "valueString": "Prof. Jonathan K. Sterling",
    },
)
[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.
Installation#
First, you need to install the arxiv python package.
#!pip install arxiv
ArxivRetriever has these arguments:
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (the date the document was published or last updated), Title, Authors, Summary. If True, other fields are also downloaded.
get_relevant_documents() has one argument, query: free text used to find documents on Arxiv.org (see the sketch below).
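For example, a sketch using both arguments together (the query string is arbitrary):
from langchain.retrievers import ArxivRetriever

# A small load_max_docs keeps experiments fast; load_all_available_meta=True
# pulls the remaining metadata fields beyond Published/Title/Authors/Summary.
retriever = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)
docs = retriever.get_relevant_documents(query="heat-bath random walks")
print(docs[0].metadata.keys())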
Examples#
Running retriever#
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query='1605.08386')
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400]  # the content of the Document
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
Question Answering on facts#
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
    "What are Heat-bath random walks with Markov base?",
    "What is the ImageBind model?",
    "How does Compositional Reasoning with Large Language Models works?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base?
**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?
-> **Question**: What is the ImageBind model?
**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.
-> **Question**: How does Compositional Reasoning with Large Language Models works?
**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.
In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.
The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.
**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
AWS Kendra#
AWS Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.
Using the AWS Kendra Index Retriever#
#!pip install boto3
import boto3
from langchain.retrievers import AwsKendraIndexRetriever
Create New Retriever
kclient = boto3.client('kendra', region_name="us-east-1")
retriever = AwsKendraIndexRetriever(
    kclient=kclient,
    kendraindex="kendraindex",
)
Now you can use it to retrieve documents from the AWS Kendra index
retriever.get_relevant_documents("what is langchain")
TF-IDF#
TF-IDF means term-frequency times inverse document-frequency.
This notebook goes over how to use a retriever that, under the hood, uses TF-IDF from the scikit-learn package.
For more information on the details of TF-IDF see this blog post.
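For reference, the classic formulation is
\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t), \qquad \mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)}
where tf(t, d) is the number of times term t occurs in document d, N is the total number of documents, and df(t) is the number of documents containing t. (scikit-learn’s TfidfVectorizer uses a smoothed variant of the idf term by default.)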
# !pip install scikit-learn
from langchain.retrievers import TFIDFRetriever
Create New Retriever with Texts#
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
Create a New Retriever with Documents#
You can now create a new retriever with the documents you created.
from langchain.schema import Document
retriever = TFIDFRetriever.from_documents([Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar")])
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
PubMed Retriever#
This notebook goes over how to use PubMed as a retriever.
PubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
from langchain.retrievers import PubMedRetriever
retriever = PubMedRetriever()
retriever.get_relevant_documents("chatgpt")
[Document(page_content='', metadata={'uid': '37268021', 'title': 'Dermatology in the wake of an AI revolution: who gets a say?', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>31</Day>'}),
Document(page_content='', metadata={'uid': '37267643', 'title': 'What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>30</Day>'}),
Document(page_content='The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.', metadata={'uid': '37266721', 'title': 'The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.', 'pub_date': '<Year>2023</Year><Month>Jun</Month><Day>02</Day>'})]
Cohere Reranker#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This notebook shows how to use Cohere’s rerank endpoint in a retriever. This builds on top of ideas in the ContextualCompressionRetriever.
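As a preview of where this is headed: once a base retriever exists, the reranking step wraps it in a ContextualCompressionRetriever whose compressor is Cohere’s reranker. A minimal sketch, assuming the CohereRerank document compressor (the base retriever is built in the next section):
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

# `retriever` is the base vector store retriever set up below; CohereRerank
# re-orders its candidates using Cohere's rerank endpoint.
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)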
#!pip install cohere
#!pip install faiss
# OR (depending on Python version)
#!pip install faiss-cpu
# get a new token: https://dashboard.cohere.ai/
import os
import getpass
os.environ['COHERE_API_KEY'] = getpass.getpass('Cohere API Key:')
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
# Helper function for printing docs
def pretty_print_docs(docs):
    print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Set up the base vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 20})
query = "What did the president say about Ketanji Brown Jackson"
docs = retriever.get_relevant_documents(query)
pretty_print_docs(docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
----------------------------------------------------------------------------------------------------
Document 4:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 5:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 6:
Vice President Harris and I ran for office with a new economic vision for America.
Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up
and the middle out, not from the top down.
Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well.
America used to have the best roads, bridges, and airports on Earth.
Now our infrastructure is ranked 13th in the world.
----------------------------------------------------------------------------------------------------
Document 7:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
----------------------------------------------------------------------------------------------------
Document 8:
For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else.
But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.
Vice President Harris and I ran for office with a new economic vision for America.
----------------------------------------------------------------------------------------------------
Document 9:
All told, we created 369,000 new manufacturing jobs in America just last year.
Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight.
As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.”
It’s time.
But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.
----------------------------------------------------------------------------------------------------
Document 10:
I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it.
This is personal to me and Jill, to Kamala, and to so many of you.
Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 11:
He will never extinguish their love of freedom. He will never weaken the resolve of the free world.
We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.
The pandemic has been punishing.
And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.
I understand.
----------------------------------------------------------------------------------------------------
Document 12:
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 13:
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.
----------------------------------------------------------------------------------------------------
Document 14:
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 15:
Third, support our veterans.
Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
----------------------------------------------------------------------------------------------------
Document 16:
When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation.
And I know you’re tired, frustrated, and exhausted.
But I also know this.
----------------------------------------------------------------------------------------------------
Document 17:
Now is the hour.
Our moment of responsibility.
Our test of resolve and conscience, of history itself.
It is in this moment that our character is formed. Our purpose is found. Our future is forged.
Well I know this nation.
We will meet the test.
To protect freedom and liberty, to expand fairness and opportunity.
We will save democracy.
As hard as these times have been, I am more optimistic about America today than I have been my whole life.
----------------------------------------------------------------------------------------------------
Document 18:
He didn’t know how to stop fighting, and neither did she.
Through her pain she found purpose to demand we do better.
Tonight, Danielle—we are.
The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits.
And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.
----------------------------------------------------------------------------------------------------
Document 19:
I understand.
I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.
That’s why one of the first things I did as President was fight to pass the American Rescue Plan.
Because people were hurting. We needed to act, and we did.
Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
Doing reranking with CohereRerank#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add a CohereRerank, which uses the Cohere rerank endpoint to rerank the returned results.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
llm = OpenAI(temperature=0)
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
You can of course use this retriever within a QA pipeline
from langchain.chains import RetrievalQA
chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=compression_retriever)
chain({"query": query})
{'query': 'What did the president say about Ketanji Brown Jackson',
'result': " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."}
previous
Self-querying with Chroma
next
Contextual Compression
Contents
Set up the base vector store retriever
Doing reranking with CohereRerank
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
Getting Started
Contents
Add texts
From Documents
Getting Started#
This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.
This covers generic high level functionality related to all vector stores.
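As a quick illustration (a minimal sketch; embed_query and embed_documents are the standard methods on the Embeddings interface), creating those vectors on their own looks like this:
from langchain.embeddings.openai import OpenAIEmbeddings
emb = OpenAIEmbeddings()
vector = emb.embed_query("hello world") # one vector for a query string
vectors = emb.embed_documents(["foo", "bar"]) # one vector per document text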
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Add texts#
You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).
docsearch.add_texts(["Ankush went to Princeton"])
['a05e3d0c-ab40-11ed-a853-e65801318981']
query = "Where did Ankush go to college?"
docs = docsearch.similarity_search(query)
docs[0]
Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)
From Documents#
We can also initialize a vectorstore from documents directly. This is useful when we use the method on the text splitter to get documents directly (handy when the original documents have associated metadata).
documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}])
docsearch = Chroma.from_documents(documents, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
previous
Vectorstores
next
AnalyticDB
Contents
Add texts
From Documents
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
Redis
Contents
Installing
Example
Redis as Retriever
Redis#
Redis (Remote Dictionary Server) is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability.
This notebook shows how to use functionality related to the Redis vector database.
Installing#
!pip install redis
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
Example#
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
rds.index_name
'link'
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
print(rds.add_texts(["Ankush went to Princeton"]))
['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']
query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton
# Load from existing index
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link')
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Redis as Retriever#
Here we go over different options for using the vector store as a retriever.
There are three different search methods we can use to do retrieval. By default, it will use semantic similarity.
retriever = rds.as_retriever()
docs = retriever.get_relevant_documents(query)
We can also use similarity_limit as a search method. This only returns documents if they are similar enough.
retriever = rds.as_retriever(search_type="similarity_limit")
# Here we can see it doesn't return any results because there are no relevant documents
retriever.get_relevant_documents("where did ankush go to college?")
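Standard retriever keyword arguments also apply. For example, a sketch using the generic search_kwargs supported by LangChain retrievers (not a Redis-specific option):
# Return the top 2 documents instead of the default 4
retriever = rds.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents(query)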
previous
Qdrant
next
SingleStoreDB vector search
Contents
Installing
Example
Redis as Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
ClickHouse Vector Search
Contents
Setting up environments
Get connection info and data schema
Clickhouse table schema
Filtering
Deleting your data
ClickHouse Vector Search#
ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
This notebook shows how to use functionality related to the ClickHouse vector search.
Setting up environments#
Setting up a local ClickHouse server with Docker (optional)
! docker run -d -p 8123:8123 -p 9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11
Set up the ClickHouse client driver
!pip install clickhouse-connect
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
if not os.environ.get('OPENAI_API_KEY'):
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for d in docs:
d.metadata = {'some': 'metadata'}
settings = ClickhouseSettings(table="clickhouse_vector_search_example")
docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Get connection info and data schema#
print(str(docsearch))
default.clickhouse_vector_search_example @ localhost:8123
username: None
Table Schema:
---------------------------------------------------
|id |Nullable(String) |
|document |Nullable(String) |
|embedding |Array(Float32) |
|metadata |Object('json') |
|uuid |UUID |
---------------------------------------------------
Clickhouse table schema#
The ClickHouse table will be created automatically if it does not exist. Advanced users can pre-create the table with optimized settings. For a distributed ClickHouse cluster with sharding, the table engine should be configured as Distributed (see the sketch after the DDL below).
print(f"Clickhouse Table DDL:\n\n{docsearch.schema}")
Clickhouse Table DDL:
CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example(
id Nullable(String),
document Nullable(String),
embedding Array(Float32),
metadata JSON,
uuid UUID DEFAULT generateUUIDv4(),
CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,
INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000
) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192
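As a sketch of the Distributed setup mentioned above (the cluster name my_cluster and the _local table name are assumptions; adapt them to your topology, and note that a local MergeTree table must already exist on every shard):
import clickhouse_connect
client = clickhouse_connect.get_client(host='localhost', port=8123)
# Route reads/writes across shards; rand() is the sharding key here
client.command("""
CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example
ON CLUSTER my_cluster
AS default.clickhouse_vector_search_example_local
ENGINE = Distributed(my_cluster, default, clickhouse_vector_search_example_local, rand())
""")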
Filtering#
You have direct access to the ClickHouse SQL WHERE statement; you can write a WHERE clause following standard SQL.
NOTE: Please be aware of SQL injection; this interface must not be called directly by end users.
If you customized your column_map in your settings, you can search with a filter like this:
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
d.metadata = {'doc_id': i}
docsearch = Clickhouse.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores('What did the president say about Ketanji Brown Jackson?',
k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
print(dist, d.metadata, d.page_content[:20] + '...')
0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam...
0.6997970363474885 {'doc_id': 8} And so many families...
0.7044504914336727 {'doc_id': 1} Groups of citizens b...
0.7053558702165094 {'doc_id': 6} And I’m taking robus...
Deleting your data#
docsearch.drop()
previous
Chroma
next
Deep Lake
Contents
Setting up environments
Get connection info and data schema
Clickhouse table schema
Filtering
Deleting your data
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
ElasticSearch
Contents
ElasticSearch
ElasticVectorSearch class
Installation
Example
ElasticKnnSearch Class
Test adding vectors
Test knn search using query vector builder
Test knn search using pre-generated vector
Test source option
Test fields option
Test with es client connection rather than cloud_id
ElasticSearch#
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
This notebook shows how to use functionality related to the Elasticsearch database.
ElasticVectorSearch class#
Installation#
Check out Elasticsearch installation instructions.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
Format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
!pip install elasticsearch
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
Example#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
ElasticKnnSearch Class#
The ElasticKnnSearch class supports storing vectors and documents in Elasticsearch for use with approximate kNN search.
!pip install langchain elasticsearch
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings
import elasticsearch
# Initialize ElasticsearchEmbeddings
model_id = "<model_id_from_es>"
dims = dim_count
es_cloud_id = "ESS_CLOUD_ID"
es_user = "es_user"
es_password = "es_pass"
test_index = "<index_name>"
#input_field = "your_input_field" # if different from 'text_field'
# Generate embedding object
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
#input_field=input_field,
es_cloud_id=es_cloud_id,
es_user=es_user,
es_password=es_password,
)
# Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
es_cloud_id=es_cloud_id,
es_user=es_user,
es_password=es_password,
index_name= test_index,
embedding= embeddings
)
Test adding vectors#
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
knn_search.from_texts(new_texts, dims=dims)
Test knn search using query vector builder#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)
print(f"kNN search results for query '{query}': {knn_result}")
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2)
print(f"Hybrid search results for query '{query}': {hybrid_result}")
print(f"The 'text' field value from the top hit is: '{hybrid_result['hits']['hits'][0]['_source']['text']}'")
Test knn search using pre-generated vector#
# Generate embedding for tests
query_text = 'Hello'
query_embedding = embeddings.embed_query(query_text)
print(f"Length of embedding: {len(query_embedding)}\nFirst two items in embedding: {query_embedding[:2]}")
# Test knn Search
knn_result = knn_search.knn_search(query_vector = query_embedding, k=2)
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
# Test hybrid search - Requires both query_text and query_vector
knn_result = knn_search.knn_hybrid_search(query_vector = query_embedding, query=query_text, k=2)
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
Test source option#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, source=False)
assert not '_source' in knn_result['hits']['hits'][0].keys()
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, source=False)
assert not '_source' in hybrid_result['hits']['hits'][0].keys()
Test fields option#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, fields=['text'])
assert 'text' in knn_result['hits']['hits'][0]['fields'].keys()
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, fields=['text'])
assert 'text' in hybrid_result['hits']['hits'][0]['fields'].keys()
Test with es client connection rather than cloud_id#
# Create Elasticsearch connection
from elasticsearch import Elasticsearch
es_connection = Elasticsearch(
hosts=['https://es_cluster_url:port'],
basic_auth=('user', 'password')
)
# Instantiate ElasticsearchEmbeddings using es_connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
)
# Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
es_connection = es_connection,
index_name= test_index,
embedding= embeddings
)
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)
print(f"kNN search results for query '{query}': {knn_result}")
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
previous
DocArrayInMemorySearch
next
FAISS
Contents
ElasticSearch
ElasticVectorSearch class
Installation
Example
ElasticKnnSearch Class
Test adding vectors
Test knn search using query vector builder
Test knn search using pre-generated vector
Test source option
Test fields option
Test with es client connection rather than cloud_id
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
OpenSearch
Contents
Installation
similarity_search using Approximate k-NN
similarity_search using Script Scoring
similarity_search using Painless Scripting
Using a preexisting OpenSearch instance
OpenSearch#
OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
This notebook shows how to use functionality related to the OpenSearch database.
To run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.
By default, similarity_search performs an Approximate k-NN search, which uses one of several algorithms (lucene, nmslib, faiss) recommended for large datasets. To perform a brute-force search, there are other search methods known as Script Scoring and Painless Scripting.
Check this for more details.
Installation#
Install the Python client.
!pip install opensearch-py
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
similarity_search using Approximate k-NN#
similarity_search using Approximate k-NN Search with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(
docs,
embeddings,
opensearch_url="http://localhost:9200"
)
# If using the default Docker installation, use this instantiation instead:
# docsearch = OpenSearchVectorSearch.from_documents(
# docs,
# embeddings,
# opensearch_url="https://localhost:9200",
# http_auth=("admin", "admin"),
# use_ssl = False,
# verify_certs = False,
# ssl_assert_hostname = False,
# ssl_show_warn = False,
# )
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query, k=10)
print(docs[0].page_content)
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Script Scoring#
similarity_search using Script Scoring with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring")
print(docs[0].page_content)
similarity_search using Painless Scripting#
similarity_search using Painless Scripting with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter)
print(docs[0].page_content)
Using a preexisting OpenSearch instance#
It’s also possible to use a preexisting OpenSearch instance with documents that already have vectors present.
# this is just an example, you would need to change these values to point to another opensearch instance
docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200")
# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata
docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
previous
MyScale
next
PGVector
Contents
Installation
similarity_search using Approximate k-NN
similarity_search using Script Scoring
similarity_search using Painless Scripting
Using a preexisting OpenSearch instance
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
DocArrayInMemorySearch
Contents
Setup
Using DocArrayInMemorySearch
Similarity search
Similarity search with score
DocArrayInMemorySearch#
DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.
This notebook shows how to use functionality related to the DocArrayInMemorySearch.
Setup#
Uncomment the below cells to install docarray and get/set your OpenAI api key if you haven’t already done so.
# !pip install "docarray"
# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Using DocArrayInMemorySearch#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.document_loaders import TextLoader
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)
Similarity search#
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
The returned distance score is cosine distance. Therefore, a lower score is better.
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}),
0.8154190158347903)
previous
DocArrayHnswSearch
next
ElasticSearch
Contents
Setup
Using DocArrayInMemorySearch
Similarity search
Similarity search with score
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
Typesense
Contents
Similarity Search
Typesense as a Retriever
Typesense#
Typesense is an open source, in-memory search engine that you can either self-host or run on Typesense Cloud.
Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
It also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.
This notebook shows you how to use Typesense as your VectorStore.
Let’s first install our dependencies:
!pip install typesense openapi-schema-pydantic openai tiktoken
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Typesense
from langchain.document_loaders import TextLoader
Let’s import our test dataset:
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Typesense.from_documents(docs,
embeddings,
typesense_client_params={
'host': 'localhost', # Use xxx.a1.typesense.net for Typesense Cloud
'port': '8108', # Use 443 for Typesense Cloud
'protocol': 'http', # Use https for Typesense Cloud
'typesense_api_key': 'xyz',
'typesense_collection_name': 'lang-chain'
})
Similarity Search#
query = "What did the president say about Ketanji Brown Jackson"
found_docs = docsearch.similarity_search(query)
print(found_docs[0].page_content)
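Typesense can also combine attribute-based filtering with the vector query, as mentioned above. A hedged sketch (it assumes the filter argument is passed through to Typesense’s filter_by syntax and that a page metadata field is indexed in the collection; both are assumptions to adapt):
# `filter` is assumed to map to Typesense's filter_by syntax; `page` is a hypothetical metadata field
found_docs = docsearch.similarity_search(query, k=2, filter='page:>5')
print(found_docs[0].page_content)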
Typesense as a Retriever#
Typesense, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.
retriever = docsearch.as_retriever()
retriever
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
previous
Tigris
next
Vectara
Contents
Similarity Search
Typesense as a Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
SingleStoreDB vector search
SingleStoreDB vector search#
SingleStore DB is a high-performance distributed database that supports deployment both in the cloud and on-premises. It has long supported vector functions such as dot_product, making it an ideal solution for AI applications that require text similarity matching.
This tutorial illustrates how to utilize the features of the SingleStore DB Vector Store.
# Establishing a connection to the database is facilitated through the singlestoredb Python connector.
# Please ensure that this connector is installed in your working environment.
!pip install singlestoredb
import os
import getpass
# We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SingleStoreDB
from langchain.document_loaders import TextLoader
# Load text samples
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
There are several ways to establish a connection to the database. You can either set up environment variables or pass named parameters to the SingleStoreDB constructor. Alternatively, you may provide these parameters to the from_documents and from_texts methods.
# Setup connection url as environment variable
os.environ['SINGLESTOREDB_URL'] = 'root:pass@localhost:3306/db'
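Alternatively, a sketch of passing connection parameters as named arguments instead of the URL (these kwargs are assumed to be forwarded to the singlestoredb connector; verify against your driver version):
# docsearch = SingleStoreDB.from_documents(
# docs,
# embeddings,
# host="localhost",
# port=3306,
# user="root",
# password="pass",
# database="db",
# )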
# Load documents to the store
docsearch = SingleStoreDB.from_documents(
docs,
embeddings,
table_name = "noteook", # use table with a custom name
)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query) # Find documents that correspond to the query
print(docs[0].page_content)
previous
Redis
next
SKLearnVectorStore
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
Vectara
Contents
Connecting to Vectara from LangChain
Similarity search
Similarity search with score
Vectara as a Retriever
Vectara#
Vectara is an API platform for building LLM-powered applications. It provides a simple-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
This notebook shows how to use functionality related to the Vectara vector database.
See the Vectara API documentation for more information on how to use the API.
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Vectara
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Connecting to Vectara from LangChain#
The Vectara API provides simple API endpoints for indexing and querying.
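Before indexing, the client needs Vectara credentials. They are typically read from environment variables; the VECTARA_* names below are what the integration is assumed to look for (they can also be passed as constructor arguments):
os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")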
vectara = Vectara.from_documents(docs, embedding=None)
Similarity search#
The simplest scenario for using Vectara is to perform a similarity search.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = vectara.similarity_search(query, n_sentence_context=0)
print(found_docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = vectara.similarity_search_with_score(query)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Score: 0.7129974
Vectara as a Retriever#
Vectara, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.
retriever = vectara.as_retriever()
retriever
VectaraRetriever(vectorstore=<langchain.vectorstores.vectara.Vectara object at 0x122db2830>, search_type='similarity', search_kwargs={'lambda_val': 0.025, 'k': 5, 'filter': '', 'n_sentence_context': '0'})
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
previous
Typesense
next
Weaviate
Contents
Connecting to Vectara from LangChain
Similarity search
Similarity search with score
Vectara as a Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
AnalyticDB
AnalyticDB#
AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
AnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.
This notebook shows how to use functionality related to the AnalyticDB vector database.
To run, you should have an AnalyticDB instance up and running:
Using AnalyticDB Cloud Vector Database. Click here to deploy it quickly.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import AnalyticDB
Split the documents and get embeddings by calling the OpenAI API
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Connect to AnalyticDB by setting the related environment variables.
export PG_HOST={your_analyticdb_hostname}
export PG_PORT={your_analyticdb_port} # Optional, default is 5432
export PG_DATABASE={your_database} # Optional, default is postgres
export PG_USER={database_username}
export PG_PASSWORD={database_password}
Then store your embeddings and documents in AnalyticDB
import os
connection_string = AnalyticDB.connection_string_from_db_params(
driver=os.environ.get("PG_DRIVER", "psycopg2cffi"),
host=os.environ.get("PG_HOST", "localhost"),
port=int(os.environ.get("PG_PORT", "5432")),
database=os.environ.get("PG_DATABASE", "postgres"),
user=os.environ.get("PG_USER", "postgres"),
password=os.environ.get("PG_PASSWORD", "postgres"),
)
vector_db = AnalyticDB.from_documents(
docs,
embeddings,
connection_string= connection_string,
)
Query and retrieve data
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
previous
Getting Started
next
Annoy
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023.
.ipynb
.pdf
FAISS
Contents
Similarity Search with score
Saving and loading
Merging
FAISS#
Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.
Faiss documentation.
This notebook shows how to use functionality related to the FAISS vector database.
#!pip install faiss
# OR
!pip install faiss-cpu
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization
# os.environ['FAISS_NO_AVX2'] = '1'
OpenAI API Key: ········
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html |
9d4bfd23f2ab-1 |
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity Search with score#
There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance; therefore, a lower score is better.
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0] | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html |
9d4bfd23f2ab-2 |
(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
0.3914415)
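Because the score is a raw L2 distance, a simple follow-up pattern is to drop weak matches above a cutoff. A minimal sketch (the 0.4 threshold is illustrative and should be tuned on your own data):
# Keep only documents whose L2 distance to the query is below the cutoff.
relevant_docs = [doc for doc, score in docs_and_scores if score < 0.4]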
It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.
embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)
Saving and loading#
You can also save and load a FAISS index. This is useful so you don’t have to recreate it every time you use it.
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0] | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html |
9d4bfd23f2ab-3 |
Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)
Merging#
You can also merge two FAISS vectorstores
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.docstore._dict
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0)}
db2.docstore._dict
{'bdc50ae3-a1bb-4678-9260-1b0979578f40': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
db1.merge_from(db2)
db1.docstore._dict | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html |
9d4bfd23f2ab-4 |
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0),
'd5211050-c777-493d-8825-4800e74cfdb6': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
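After the merge, a search against db1 covers the vectors from both stores. A quick sanity check using the toy texts above:
db1.similarity_search("foo", k=2) # returns both 'foo' and 'bar', ranked by distance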
previous
ElasticSearch
next
LanceDB
Contents
Similarity Search with score
Saving and loading
Merging
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html |
54e9dd38bf3c-0 | .ipynb
.pdf
Supabase (Postgres)
Contents
Similarity search with score
Retriever options
Maximal Marginal Relevance Searches
Supabase (Postgres)#
Supabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.
PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.
This notebook shows how to use Supabase and pgvector as your VectorStore.
To run this notebook, please ensure:
the pgvector extension is enabled
you have installed the supabase-py package
that you have created a match_documents function in your database
that you have a documents table in your public schema similar to the one below.
The following function computes cosine similarity, but you can adjust it to your needs.
-- Enable the pgvector extension to work with embedding vectors
create extension vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)
RETURNS TABLE(
id bigint,
content text,
metadata jsonb,
-- we return matched vectors to enable maximal marginal relevance searches
embedding vector(1536),
similarity float)
LANGUAGE plpgsql
AS $$
#variable_conflict use_column
BEGIN
RETURN query
SELECT
id,
content,
metadata,
embedding, | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-1 |
1 -(documents.embedding <=> query_embedding) AS similarity
FROM
documents
ORDER BY
documents.embedding <=> query_embedding
LIMIT match_count;
END;
$$;
# with pip
!pip install supabase
# with conda
# !conda install -c conda-forge supabase
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['SUPABASE_URL'] = getpass.getpass('Supabase URL:')
os.environ['SUPABASE_SERVICE_KEY'] = getpass.getpass('Supabase Service Key:')
# If you're storing your Supabase and OpenAI API keys in a .env file, you can load them with dotenv
from dotenv import load_dotenv
load_dotenv()
import os
from supabase.client import Client, create_client
supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SupabaseVectorStore
from langchain.document_loaders import TextLoader
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-2 |
embeddings = OpenAIEmbeddings()
# We're using the default `documents` table here. You can modify this by passing in a `table_name` argument to the `from_documents` method.
vector_store = SupabaseVectorStore.from_documents(
docs, embeddings, client=supabase
)
query = "What did the president say about Ketanji Brown Jackson"
matched_docs = vector_store.similarity_search(query)
print(matched_docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
The returned score is a relevance score based on cosine similarity (the match_documents function above returns 1 - cosine distance as similarity), so a higher score indicates a closer match.
matched_docs = vector_store.similarity_search_with_relevance_scores(query)
matched_docs[0] | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-3 |
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),
0.802509746274066)
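Since a higher score means a closer match, one hedged pattern is to keep only results above a minimum relevance (the 0.75 cutoff is illustrative and should be tuned on your own data):
confident_docs = [doc for doc, score in matched_docs if score >= 0.75]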
Retriever options#
This section goes over different options for how to use SupabaseVectorStore as a retriever.
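The simplest option is the default similarity retriever. A minimal sketch:
retriever = vector_store.as_retriever() # search_type defaults to "similarity"
matched_docs = retriever.get_relevant_documents(query)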
Maximal Marginal Relevance Searches#
In addition to using similarity search in the retriever object, you can also use mmr.
retriever = vector_store.as_retriever(search_type="mmr")
matched_docs = retriever.get_relevant_documents(query)
for i, d in enumerate(matched_docs):
print(f"\n## Document {i}\n")
print(d.page_content)
## Document 0
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-4 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
## Document 1
One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more.
When they came home, many of the world’s fittest and best trained warriors were never the same.
Headaches. Numbness. Dizziness.
A cancer that would put them in a flag-draped coffin.
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.
Stationed near Baghdad, just yards from burn pits the size of football fields.
Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.
## Document 2 | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-5 |
And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers.
Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.
America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.
These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming.
But I want you to know that we are going to be okay.
When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger.
While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly.
## Document 3
We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.
I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
54e9dd38bf3c-6 |
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
previous
SKLearnVectorStore
next
Tair
Contents
Similarity search with score
Retriever options
Maximal Marginal Relevance Searches
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html |
63212a219bbe-0 | .ipynb
.pdf
MatchingEngine
Contents
Create VectorStore from texts
Create Index and deploy it to an Endpoint
Imports, Constants and Configs
Using Tensorflow Universal Sentence Encoder as an Embedder
Inserting a test embedding
Creating Index
Creating Endpoint
Deploy Index
MatchingEngine#
This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.
Vertex AI Matching Engine provides the industry’s leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.
Note: This module expects an endpoint and a deployed index to already exist, as index creation can take close to an hour. To see how to create an index, refer to the section Create Index and deploy it to an Endpoint
Create VectorStore from texts#
from langchain.vectorstores import MatchingEngine
texts = ['The cat sat on', 'the mat.', 'I like to', 'eat pizza for', 'dinner.', 'The sun sets', 'in the west.']
vector_store = MatchingEngine.from_components(
texts=texts,
project_id="<my_project_id>",
region="<my_region>",
gcs_bucket_uri="<my_gcs_bucket>",
index_id="<my_matching_engine_index_id>",
endpoint_id="<my_matching_engine_endpoint_id>"
)
vector_store.add_texts(texts=texts)
vector_store.similarity_search("lunch", k=2)
Create Index and deploy it to an Endpoint#
Imports, Constants and Configs#
# Installing dependencies.
!pip install tensorflow \
google-cloud-aiplatform \
tensorflow-hub \
tensorflow-text
import os
import json
from google.cloud import aiplatform
import tensorflow_hub as hub
import tensorflow_text
PROJECT_ID = "<my_project_id>"
REGION = "<my_region>" | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html |
63212a219bbe-1 |
VPC_NETWORK = "<my_vpc_network_name>"
PEERING_RANGE_NAME = "ann-langchain-me-range" # Name for creating the VPC peering.
BUCKET_URI = "gs://<bucket_uri>"
# The number of dimensions for the tensorflow universal sentence encoder.
# If other embedder is used, the dimensions would probably need to change.
DIMENSIONS = 512
DISPLAY_NAME = "index-test-name"
EMBEDDING_DIR = f"{BUCKET_URI}/banana"
DEPLOYED_INDEX_ID = "endpoint-test-name"
PROJECT_NUMBER = !gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = PROJECT_NUMBER[0]
VPC_NETWORK_FULL = f"projects/{PROJECT_NUMBER}/global/networks/{VPC_NETWORK}"
# Change this if you need the VPC to be created.
CREATE_VPC = False
# Set the project id
! gcloud config set project {PROJECT_ID}
# Remove the if condition to run the encapsulated code
if CREATE_VPC:
# Create a VPC network
! gcloud compute networks create {VPC_NETWORK} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID}
# Add necessary firewall rules
! gcloud compute firewall-rules create {VPC_NETWORK}-allow-icmp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow icmp
! gcloud compute firewall-rules create {VPC_NETWORK}-allow-internal --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9 | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html |
63212a219bbe-2 | ! gcloud compute firewall-rules create {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:3389
! gcloud compute firewall-rules create {VPC_NETWORK}-allow-ssh --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:22
# Reserve IP range
! gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={VPC_NETWORK} --purpose=VPC_PEERING --project={PROJECT_ID} --description="peering range"
# Set up peering with service networking
# Your account must have the "Compute Network Admin" role to run the following.
! gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={VPC_NETWORK} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}
# Creating bucket.
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
Using Tensorflow Universal Sentence Encoder as an Embedder#
# Load the Universal Sentence Encoder module
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
model = hub.load(module_url)
# Generate embeddings for each word
embeddings = model(['banana'])
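As a quick sanity check (our addition, not part of the original notebook), the encoder output width should match the DIMENSIONS constant used when creating the index:
assert embeddings.numpy().shape[1] == DIMENSIONS # this encoder outputs 512-d vectors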
Inserting a test embedding#
initial_config = {"id": "banana_id", "embedding": [float(x) for x in list(embeddings.numpy()[0])]}
with open("data.json", "w") as f:
json.dump(initial_config, f)
!gsutil cp data.json {EMBEDDING_DIR}/file.json
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Creating Index# | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html |
63212a219bbe-3 |
my_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDING_DIR,
dimensions=DIMENSIONS,
approximate_neighbors_count=150,
distance_measure_type="DOT_PRODUCT_DISTANCE"
)
Creating Endpoint#
my_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name=f"{DISPLAY_NAME}-endpoint",
network=VPC_NETWORK_FULL,
)
Deploy Index#
my_index_endpoint = my_index_endpoint.deploy_index(
index=my_index,
deployed_index_id=DEPLOYED_INDEX_ID
)
my_index_endpoint.deployed_indexes
previous
LanceDB
next
Milvus
Contents
Create VectorStore from texts
Create Index and deploy it to an Endpoint
Imports, Constants and Configs
Using Tensorflow Universal Sentence Encoder as an Embedder
Inserting a test embedding
Creating Index
Creating Endpoint
Deploy Index
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/matchingengine.html |
3e8ffb678ec1-0 | .ipynb
.pdf
LanceDB
LanceDB#
LanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings.
This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.
!pip install lancedb
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
documents = CharacterTextSplitter().split_documents(documents)
embeddings = OpenAIEmbeddings()
import lancedb
db = lancedb.connect('/tmp/lancedb')
table = db.create_table("my_table", data=[
{"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}
], mode="overwrite")
docsearch = LanceDB.from_documents(documents, embeddings, connection=table)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html |
3e8ffb678ec1-1 | I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope.
We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities.
I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.
And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced.
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon?
Ban assault weapons and high-capacity magazines.
Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued.
These laws don’t infringe on the Second Amendment. They save lives. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html |
3e8ffb678ec1-2 |
The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault.
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html |
3e8ffb678ec1-3 |
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
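Because the table was created under /tmp/lancedb, it persists between runs. A hedged sketch of reconnecting later without re-embedding (the LanceDB wrapper's connection and embedding parameter names are assumed; verify against your installed version):
db = lancedb.connect('/tmp/lancedb')
table = db.open_table("my_table") # reopen the existing table
docsearch = LanceDB(connection=table, embedding=embeddings)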
previous
FAISS
next
MatchingEngine
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lancedb.html |
422cc08f47b4-0 | .ipynb
.pdf
Qdrant
Contents
Connecting to Qdrant from LangChain
Local mode
In-memory
On-disk storage
On-premise server deployment
Qdrant Cloud
Reusing the same collection
Similarity search
Similarity search with score
Metadata filtering
Maximum marginal relevance search (MMR)
Qdrant as a Retriever
Customizing Qdrant
Qdrant#
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.
This notebook shows how to use functionality related to the Qdrant vector database.
There are various modes of running Qdrant, and depending on the one chosen, there will be some subtle differences. The options include:
Local mode, no server required
On-premise server deployment
Qdrant Cloud
See the installation instructions.
!pip install qdrant-client
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key: ········
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents) | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html |
422cc08f47b4-1 |
embeddings = OpenAIEmbeddings()
Connecting to Qdrant from LangChain#
Local mode#
The Python client allows you to run the same code in local mode without running the Qdrant server. That’s great for testing things out and debugging, or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.
In-memory#
For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.
qdrant = Qdrant.from_documents(
docs, embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
On-disk storage#
Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.
qdrant = Qdrant.from_documents(
docs, embeddings,
path="/tmp/local_qdrant",
collection_name="my_documents",
)
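To reopen the same on-disk collection later without re-embedding, you can construct the store directly from a local client. A hedged sketch (constructor parameter names are assumed; verify against your installed qdrant-client and langchain versions):
import qdrant_client
client = qdrant_client.QdrantClient(path="/tmp/local_qdrant")
qdrant = Qdrant(client=client, collection_name="my_documents", embeddings=embeddings)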
On-premise server deployment#
No matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you’re going to connect to such an instance will be identical. You’ll need to provide a URL pointing to the service.
url = "<---qdrant url here --->"
qdrant = Qdrant.from_documents(
docs, embeddings,
url, prefer_grpc=True,
collection_name="my_documents",
)
Qdrant Cloud# | https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html |