Milvus#
This notebook shows how to use functionality related to the Milvus vector database.
To run, you should have a Milvus instance up and running: https://milvus.io/docs/install_standalone-docker.md
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0]
Annoy#
This notebook shows how to use functionality related to the Annoy vector database.
“Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.”
via Annoy
Note
Annoy is read-only: once the index is built you cannot add any more embeddings!
If you want to add to your VectorStore progressively, choose an alternative vector store, or rebuild the index from scratch as sketched below.
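If you do need new texts later, one workaround is to rebuild the index over the combined corpus. A minimal sketch, assuming you keep the raw texts around (Annoy itself offers no way to append to a built index):
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Annoy
emb = HuggingFaceEmbeddings()
corpus = ["pizza is great", "I love salad"]  # original texts
corpus += ["my car", "a dog"]  # texts to "add"
rebuilt_store = Annoy.from_texts(corpus, emb)  # rebuild the whole index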
Create VectorStore from texts#
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Annoy
embeddings_func = HuggingFaceEmbeddings()
texts = ["pizza is great", "I love salad", "my car", "a dog"]
# default metric is angular
vector_store = Annoy.from_texts(texts, embeddings_func)
# allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric="angular"
vector_store_v2 = Annoy.from_texts(
    texts, embeddings_func, metric="dot", n_trees=100, n_jobs=1
)
vector_store.similarity_search("food", k=3)
[Document(page_content='pizza is great', metadata={}),
Document(page_content='I love salad', metadata={}),
Document(page_content='my car', metadata={})]
# the score is a distance metric, so lower is better
vector_store.similarity_search_with_score("food", k=3)
[(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),
(Document(page_content='I love salad', metadata={}), 1.1273186206817627),
(Document(page_content='my car', metadata={}), 1.1580758094787598)]
Create VectorStore from docs#
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
docs[:5]
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}),
Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}),
Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}),
Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}),
Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})]
vector_store_from_docs = Annoy.from_documents(docs, embeddings_func)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store_from_docs.similarity_search(query)
print(docs[0].page_content[:100])
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Ac
Create VectorStore via existing embeddings#
embs = embeddings_func.embed_documents(texts)
data = list(zip(texts, embs))
vector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func)
vector_store_from_embeddings.similarity_search_with_score("food", k=3)
[(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),
(Document(page_content='I love salad', metadata={}), 1.1273186206817627),
(Document(page_content='my car', metadata={}), 1.1580758094787598)]
Search via embeddings#
motorbike_emb = embeddings_func.embed_query("motorbike")
vector_store.similarity_search_by_vector(motorbike_emb, k=3)
[Document(page_content='my car', metadata={}),
Document(page_content='a dog', metadata={}),
Document(page_content='pizza is great', metadata={})]
vector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3)
[(Document(page_content='my car', metadata={}), 1.0870471000671387),
(Document(page_content='a dog', metadata={}), 1.2095637321472168),
(Document(page_content='pizza is great', metadata={}), 1.3254905939102173)]
Search via docstore id#
vector_store.index_to_docstore_id
{0: '2d1498a8-a37c-4798-acb9-0016504ed798',
1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d',
2: '927f1120-985b-4691-b577-ad5cb42e011c',
3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'}
some_docstore_id = 0 # texts[0]
vector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]]
Document(page_content='pizza is great', metadata={})
# same document has distance 0
vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3)
[(Document(page_content='pizza is great', metadata={}), 0.0),
(Document(page_content='I love salad', metadata={}), 1.0734446048736572),
(Document(page_content='my car', metadata={}), 1.2895267009735107)]
Save and load#
vector_store.save_local("my_annoy_index_and_docstore")
saving config
loaded_vector_store = Annoy.load_local(
    "my_annoy_index_and_docstore", embeddings=embeddings_func
)
# same document has distance 0
loaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3)
[(Document(page_content='pizza is great', metadata={}), 0.0),
(Document(page_content='I love salad', metadata={}), 1.0734446048736572),
(Document(page_content='my car', metadata={}), 1.2895267009735107)]
Construct from scratch#
import uuid
from annoy import AnnoyIndex
from langchain.docstore.document import Document
from langchain.docstore.in_memory import InMemoryDocstore
metadatas = [{"x": "food"}, {"x": "food"}, {"x": "stuff"}, {"x": "animal"}]
# embeddings
embeddings = embeddings_func.embed_documents(texts)
# embedding dim
f = len(embeddings[0])
# index
metric = "angular"
index = AnnoyIndex(f, metric=metric)
for i, emb in enumerate(embeddings):
    index.add_item(i, emb)
index.build(10)
# docstore
documents = []
for i, text in enumerate(texts):
    metadata = metadatas[i] if metadatas else {}
    documents.append(Document(page_content=text, metadata=metadata))
index_to_docstore_id = {i: str(uuid.uuid4()) for i in range(len(documents))}
docstore = InMemoryDocstore(
    {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)}
)
db_manually = Annoy(
    embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id
)
db_manually.similarity_search_with_score("eating!", k=3)
[(Document(page_content='pizza is great', metadata={'x': 'food'}),
1.1314140558242798),
(Document(page_content='I love salad', metadata={'x': 'food'}),
1.1668788194656372),
(Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)]
Redis#
This notebook shows how to use functionality related to the Redis vector database.
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
rds.index_name
'link'
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
print(rds.add_texts(["Ankush went to Princeton"]))
['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']
query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton
# Load from existing index
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link')
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
RedisVectorStoreRetriever#
Here we go over different options for using the vector store as a retriever.
There are three different search methods we can use to do retrieval. By default, it will use semantic similarity.
retriever = rds.as_retriever()
docs = retriever.get_relevant_documents(query)
We can also use similarity_limit as a search method. This only returns documents if they are similar enough to the query.
retriever = rds.as_retriever(search_type="similarity_limit")
# Here we can see it doesn't return any results because there are no relevant documents
retriever.get_relevant_documents("where did ankush go to college?")
Weaviate#
This notebook shows how to use functionality related to the Weaviate vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import weaviate
import os
WEAVIATE_URL = ""
client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        'X-OpenAI-Api-Key': os.environ["OPENAI_API_KEY"]
    }
)
client.schema.delete_all()
client.schema.get()
schema = {
    "classes": [
        {
            "class": "Paragraph",
            "description": "A written paragraph",
            "vectorizer": "text2vec-openai",
            "moduleConfig": {
                "text2vec-openai": {
                    "model": "babbage",
                    "type": "text"
                }
            },
            "properties": [
                {
                    "dataType": ["text"],
                    "description": "The content of the paragraph",
                    "moduleConfig": {
                        "text2vec-openai": {
                            "skip": False,
                            "vectorizePropertyName": False
                        }
                    },
                    "name": "content",
                },
            ],
        },
    ]
}
client.schema.create(schema)
vectorstore = Weaviate(client, "Paragraph", "content")
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].page_content)
MyScale#
This notebook shows how to use functionality related to the MyScale vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MyScale
from langchain.document_loaders import TextLoader
Setting up environments#
There are two ways to set up parameters for the MyScale index.
Environment Variables
Before you run the app, please set the environment variables with export:
export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...
You can easily find your account, password and other info on our SaaS. For details please refer to this document.
Every attribute under MyScaleSettings can be set with the MYSCALE_ prefix and is case insensitive.
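Equivalently, you can set these from Python before constructing the store. A minimal sketch, with placeholder values:
import os
# Placeholder credentials; MyScaleSettings reads these at construction time.
os.environ["MYSCALE_URL"] = "<your-endpoints-url>"
os.environ["MYSCALE_PORT"] = "<your-endpoints-port>"
os.environ["MYSCALE_USERNAME"] = "<your-username>"
os.environ["MYSCALE_PASSWORD"] = "<your-password>"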
Create MyScaleSettings object with parameters
from langchain.vectorstores import MyScale, MyScaleSettings
config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for d in docs:
    d.metadata = {'some': 'metadata'}
docsearch = MyScale.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:18<00:00, 2.21it/s]
print(docs[0].page_content)
As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.
It’s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children.
And let’s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care.
Third, support our veterans.
Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
Get connection info and data schema#
print(str(docsearch))
Filtering#
You have direct access to the MyScale SQL WHERE statement: you can write a WHERE clause following standard SQL.
NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.
If you customized your column_map under your setting, you can search with a filter like this:
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
    d.metadata = {'doc_id': i}
docsearch = MyScale.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.69it/s]
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + '...')
0.252379834651947 {'doc_id': 6, 'some': ''} And I’m taking robus...
0.25022566318511963 {'doc_id': 1, 'some': ''} Groups of citizens b...
0.2469480037689209 {'doc_id': 8, 'some': ''} And so many families...
0.2428302764892578 {'doc_id': 0, 'some': 'metadata'} As Frances Haugen, w...
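Because where_str is interpolated directly into SQL, validate any value that comes from a user before building the clause. A minimal sketch, assuming the filter value must be an integer doc_id (the make_doc_id_filter helper is hypothetical, not part of the MyScale API):
def make_doc_id_filter(meta_column: str, max_doc_id) -> str:
    max_doc_id = int(max_doc_id)  # raises ValueError on non-numeric input
    return f"{meta_column}.doc_id < {max_doc_id}"
safe_where = make_doc_id_filter(docsearch.metadata_column, "10")
output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4, where_str=safe_where)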
Deleting your data#
docsearch.drop()
OpenSearch#
This notebook shows how to use functionality related to the OpenSearch database.
To run, you should have an OpenSearch instance up and running; see the OpenSearch installation documentation.
similarity_search by default performs an Approximate k-NN search, which uses one of several algorithms (lucene, nmslib, faiss) recommended for large datasets. To perform a brute-force search, use the other search methods, known as Script Scoring and Painless Scripting.
Check the OpenSearch documentation for more details.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Approximate k-NN Search with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Script Scoring with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring")
print(docs[0].page_content)
similarity_search using Painless Scripting with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter)
print(docs[0].page_content)
Using a preexisting OpenSearch instance#
It’s also possible to use a preexisting OpenSearch instance with documents that already have vectors present.
# this is just an example, you would need to change these values to point to another opensearch instance
docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200")
# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata
docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
PGVector#
This notebook shows how to use functionality related to the Postgres vector database (PGVector).
## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_dotenv()
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.document_loaders import TextLoader
from langchain.docstore.document import Document
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
## PGVector needs the connection string to the database.
## We will load it from the environment variables.
import os
CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host=os.environ.get("PGVECTOR_HOST", "localhost"),
    port=int(os.environ.get("PGVECTOR_PORT", "5432")),
    database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
    user=os.environ.get("PGVECTOR_USER", "postgres"),
    password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),
)
## Example
# postgresql+psycopg2://username:password@localhost:5432/database_name
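If you prefer not to go through the helper, you can assemble the same SQLAlchemy-style URL yourself. A minimal sketch, with placeholder credentials matching the defaults above:
# Equivalent hand-built connection string (placeholder credentials).
CONNECTION_STRING = "postgresql+psycopg2://postgres:postgres@localhost:5432/postgres"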
Similarity search with score#
Similarity Search with Euclidean Distance (Default)#
# The PGVector Module will try to create a table with the name of the collection. So, make sure that the collection name is unique and the user has the
# permission to create a table.
db = PGVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name="state_of_the_union",
    connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)
--------------------------------------------------------------------------------
Score: 0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
Deep Lake#
This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully fledged serverless data lake with version control, query engine and streaming dataloader to deep learning frameworks.
For more information, please see the Deep Lake documentation or api reference
!python3 -m pip install openai deeplake tiktoken
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
embeddings = OpenAIEmbeddings()
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
Create a dataset locally at ./my_deeplake/, then run a similarity search:
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings, overwrite=True)
db.add_documents(docs)
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Later, you can reload the dataset without recomputing embeddings
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True)
docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage
Dataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Deep Lake, for now, is single-writer and multiple-reader. Setting read_only=True helps to avoid acquiring the writer lock.
Retrieval Question/Answering#
from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat
qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())
/media/sdb/davit/Git/experiments/langchain/langchain/llms/openai.py:672: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)
"The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court, describing her as one of the nation's top legal minds and a consensus builder with a background in private practice and public defense, and noting that she has received broad support from both Democrats and Republicans."
Attribute based filtering in metadata#
import random
for d in docs:
    d.metadata['year'] = random.randint(2012, 2014)
db = DeepLake.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})
100%|██████████| 4/4 [00:00<00:00, 1080.24it/s]
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
Choosing distance function#
Distance functions: L2 for Euclidean, L1 for nuclear, max for L-infinity distance, cos for cosine similarity, dot for dot product.
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012})]
Maximal Marginal relevance#
Using maximal marginal relevance
db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
Delete dataset#
db.delete_dataset()
If the delete fails, you can also force delete:
DeepLake.force_delete_by_path("./my_deeplake")
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local#
By default, Deep Lake datasets are stored in memory; if you want to persist locally or to any object storage, simply provide a path to the dataset. You can retrieve your token from app.activeloop.ai.
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
# Embed and store the texts
username = "<username>" # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_test" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True)
db.add_documents(docs)
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test
hub://davitbun/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:14<00:00
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
['d6d6ccb4-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb5-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb6-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb7-e187-11ed-b66d-41c5f7b85421']
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Creating dataset on AWS S3#
dataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds={
    'aws_access_key_id': os.environ['AWS_ACCESS_KEY_ID'],
    'aws_secret_access_key': os.environ['AWS_SECRET_ACCESS_KEY'],
    'aws_session_token': os.environ['AWS_SESSION_TOKEN'],  # Optional
})
s3://hub-2.0-datasets-n/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00
Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Deep Lake API#
You can access the underlying Deep Lake dataset at db.ds:
# get structure of the dataset
db.ds.summary()
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
# get embeddings numpy array
embeds = db.ds.embedding.numpy()
Transfer local dataset to cloud#
Copy an already-created dataset to the cloud. You can also transfer from cloud to local; a sketch of that direction follows the example below.
import deeplake
username = "davitbun" # your username on app.activeloop.ai
source = f"hub://{username}/langchain_test" # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.
deeplake.deepcopy(src=source, dest=destination, overwrite=True)
Copying dataset: 100%|██████████| 56/56 [00:38<00:00
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
db = DeepLake(dataset_path=destination, embedding_function=embeddings)
db.add_documents(docs)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
hub://davitbun/langchain_test_copy loaded successfully.
Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (8, 1536) float32 None
ids text (8, 1) str None
metadata json (8, 1) str None
text text (8, 1) str None
['ad42f3fe-e188-11ed-b66d-41c5f7b85421',
'ad42f3ff-e188-11ed-b66d-41c5f7b85421',
'ad42f400-e188-11ed-b66d-41c5f7b85421',
'ad42f401-e188-11ed-b66d-41c5f7b85421']
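Copying in the reverse direction, from the cloud back to local storage, works the same way. A minimal sketch, assuming the same deeplake package as above (the local destination path is illustrative):
import deeplake
# Copy the hosted dataset back to a local directory (the destination path is illustrative)
deeplake.deepcopy(src=f"hub://{username}/langchain_test_copy", dest="./langchain_test_local", overwrite=True)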
previous
Chroma
next
ElasticSearch
Contents
Retrieval Question/Answering
Attribute based filtering in metadata
Choosing distance function
Maximal Marginal relevance
Delete dataset
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local
Creating dataset on AWS S3
Deep Lake API
Transfer local dataset to cloud
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
.ipynb
.pdf
Weaviate Hybrid Search
Weaviate Hybrid Search#
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
import weaviate
import os
WEAVIATE_URL = "..."
client = weaviate.Client(
url=WEAVIATE_URL,
)
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
retriever = WeaviateHybridSearchRetriever(client, index_name="LangChain", text_key="text")
docs = [Document(page_content="foo")]
retriever.add_documents(docs)
['3f79d151-fb84-44cf-85e0-8682bfe145e0']
retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={})]
previous
VectorStore Retriever
next
Memory
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
.ipynb
.pdf
Metal
Contents
Ingest Documents
Query
Metal#
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID);
Ingest Documents#
You only need to do this if you haven’t already set up an index
metal.index( {"text": "foo1"})
metal.index( {"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
previous
ElasticSearch BM25
next
Pinecone Hybrid Search
Contents
Ingest Documents
Query
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/metal.html
.ipynb
.pdf
Pinecone Hybrid Search
Contents
Setup Pinecone
Get embeddings and sparse encoders
Load Retriever
Add texts (if necessary)
Use Retriever
Pinecone Hybrid Search#
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from this documentation
from langchain.retrievers import PineconeHybridSearchRetriever
Setup Pinecone#
You should only have to do this part once.
Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information checkout Pinecone’s docs.
import os
import pinecone
api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"
index_name = "langchain-pinecone-hybrid-search"
pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()
WhoAmIResponse(username='load', user_label='label', projectname='load-test')
# create the index
pinecone.create_index(
name = index_name,
dimension = 1536, # dimensionality of dense model
metric = "dotproduct", # sparse values supported only for dotproduct
pod_type = "s1",
    metadata_config={"indexed": []} # see explanation above
)
Now that it's created, we can use it
index = pinecone.Index(index_name)
Get embeddings and sparse encoders#
Embeddings are used for the dense vectors; a sparse encoder is used for the sparse vectors
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
To encode the text to sparse values you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25.
For more information about the sparse encoders you can check out the pinecone-text library docs.
from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE
# use default tf-idf values
bm25_encoder = BM25Encoder().default()
The above code uses default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do that as follows:
corpus = ["foo", "bar", "world", "hello"]
# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)
# store the values to a json file
bm25_encoder.dump("bm25_values.json")
# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
Load Retriever#
We can now construct the retriever!
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello"])
100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result[0]
Document(page_content='foo', metadata={})
previous
Metal
next
SVM Retriever
Contents
Setup Pinecone
Get embeddings and sparse encoders
Load Retriever
Add texts (if necessary)
Use Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html
.ipynb
.pdf
ElasticSearch BM25
Contents
Create New Retriever
Add texts (if necessary)
Use Retriever
ElasticSearch BM25#
This notebook goes over how to use a retriever that under the hood uses Elasticsearch and BM25.
For more information on the details of BM25 see this blog post.
from langchain.retrievers import ElasticSearchBM25Retriever
Create New Retriever#
elasticsearch_url="http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url="http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e',
'8be8374c-3253-4d87-928d-d73550a2ecf0',
'd79f457b-2842-4eab-ae10-77aa420b53d7']
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})]
previous
Databerry
next
Metal
Contents
Create New Retriever
Add texts (if necessary)
Use Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
.ipynb
.pdf
TF-IDF Retriever
Contents
Create New Retriever with Texts
Use Retriever
TF-IDF Retriever#
This notebook goes over how to use a retriever that under the hood uses TF-IDF, via scikit-learn.
For more information on the details of TF-IDF see this blog post.
from langchain.retrievers import TFIDFRetriever
# !pip install scikit-learn
Create New Retriever with Texts#
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
previous
SVM Retriever
next
Time Weighted VectorStore Retriever
Contents
Create New Retriever with Texts
Use Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tf_idf_retriever.html
.ipynb
.pdf
ChatGPT Plugin Retriever
Contents
Create
Using the ChatGPT Retriever Plugin
ChatGPT Plugin Retriever#
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
Create#
First, let’s go over how to create the ChatGPT Retriever Plugin.
To set up the ChatGPT Retriever Plugin, please follow instructions here.
You can also create the ChatGPT Retriever Plugin from LangChain document loaders. The below code walks through how to do that.
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()
# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
from langchain.docstore.document import Document
import json
def write_json(path: str, documents: List[Document]) -> None:
results = [{"text": doc.page_content} for doc in documents]
with open(path, "w") as f:
json.dump(results, f, indent=2)
write_json("foo.json", data)
# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
Using the ChatGPT Retriever Plugin#
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it?
The below code walks through how to do that.
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
previous
Retrievers
next
Contextual Compression Retriever
Contents
Create
Using the ChatGPT Retriever Plugin
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html
.ipynb
.pdf
SVM Retriever
Contents
Create New Retriever with Texts
Use Retriever
SVM Retriever#
This notebook goes over how to use a retriever that under the hood uses an SVM, via scikit-learn.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
# !pip install scikit-learn
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
previous
Pinecone Hybrid Search
next
TF-IDF Retriever
Contents
Create New Retriever with Texts
Use Retriever
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/svm_retriever.html
.ipynb
.pdf
Time Weighted VectorStore Retriever
Contents
Low Decay Rate
High Decay Rate
Time Weighted VectorStore Retriever#
This retriever uses a combination of semantic similarity and recency.
The algorithm for scoring them is:
semantic_similarity + (1.0 - decay_rate) ** hours_passed
Notably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain “fresh.”
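To make the scoring rule concrete, here is a minimal sketch in plain Python (the similarity value and decay rates below are illustrative, not library defaults):
def combined_score(semantic_similarity, decay_rate, hours_passed):
    # The scoring rule quoted above
    return semantic_similarity + (1.0 - decay_rate) ** hours_passed
# Low decay rate: a day-old memory keeps most of its recency credit
combined_score(0.5, decay_rate=0.01, hours_passed=24)   # 0.5 + 0.99**24 ≈ 1.29
# High decay rate: the recency term is essentially gone after a day
combined_score(0.5, decay_rate=0.999, hours_passed=24)  # 0.5 + 0.001**24 ≈ 0.5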
import faiss
from datetime import datetime, timedelta
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS
Low Decay Rate#
A low decay rate (here, to be extreme, we will set it close to 0) means memories will be “remembered” for longer. A decay rate of 0 means memories are never forgotten, making this retriever equivalent to the vector lookup.
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")])
['5c9f7c06-c9eb-45f2-aea5-efce5fb9f2bd']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough
retriever.get_relevant_documents("hello world")
[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 1, 966261), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 0, 374683), 'buffer_idx': 0})]
High Decay Rate#
With a high decay rate (e.g., several 9’s), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")])
['40011466-5bbe-4101-bfd1-e22e7f505de2']
# "Hello Foo" is returned first because "hello world" is mostly forgotten
retriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]
previous
TF-IDF Retriever
next
VectorStore Retriever
Contents
Low Decay Rate
High Decay Rate
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html
.ipynb
.pdf
Contextual Compression Retriever
Contents
Contextual Compression Retriever
Using a vanilla vector store retriever
Adding contextual compression with an LLMChainExtractor
More built-in compressors: filters
LLMChainFilter
EmbeddingsFilter
Stringing compressors and document transformers together
Contextual Compression Retriever#
This notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionRetriever is a wrapper for another retriever that iterates over the initial output of the base retriever and filters and compresses those initial documents, so that only the most relevant information is returned.
# Helper function for printing docs
def pretty_print_docs(docs):
print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Using a vanilla vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
----------------------------------------------------------------------------------------------------
Document 4:
Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers.
And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up.
That ends on my watch.
Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect.
We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.
Let’s pass the Paycheck Fairness Act and paid leave.
Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty.
Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an LLMChainExtractor#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence."
----------------------------------------------------------------------------------------------------
Document 2:
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
More built-in compressors: filters#
LLMChainFilter#
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter
_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
EmbeddingsFilter#
Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
Stringing compressors and document transformers together#
Using the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don’t perform any contextual compression but simply perform some transformation on a set of documents. For example TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents.
Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(
transformers=[splitter, redundant_filter, relevant_filter]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder
previous
ChatGPT Plugin Retriever
next
Databerry
Contents
Contextual Compression Retriever
Using a vanilla vector store retriever
Adding contextual compression with an LLMChainExtractor
More built-in compressors: filters
LLMChainFilter
EmbeddingsFilter
Stringing compressors and document transformers together
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
.ipynb
.pdf
VectorStore Retriever
VectorStore Retriever#
The index - and therefore the retriever - that LangChain has the most support for is a VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore.
Once you construct a VectorStore, it's very easy to construct a retriever. Let's walk through an example.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)
Exiting: Cleaning up .chroma directory
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance search, you can specify that as the search type.
retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
You can also specify search kwargs like k to use when doing retrieval.
retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs)
1
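The two options compose as well; a minimal sketch combining a search type with search kwargs (the values are illustrative):
retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 2})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")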
previous
Time Weighted VectorStore Retriever
next
Weaviate Hybrid Search
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore-retriever.html
.ipynb
.pdf
Databerry
Contents
Query
Databerry#
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data and get your datastore api endpoint url
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
retriever.get_relevant_documents("What is Daftpage?")
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
previous
Contextual Compression Retriever
next
ElasticSearch BM25
Contents
Query
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
.ipynb
.pdf
Getting Started
Getting Started#
The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves on to the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""]
In addition to controlling which characters you can split on, you can also control a few other things:
length_function: how the length of chunks is calculated. Defaults to just counting number of characters, but it's pretty common to pass a token counter here (see the sketch after this list).
chunk_size: the maximum size of your chunks (as measured by the length function).
chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain some continuity between chunks (e.g. with a sliding window).
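For example, to measure chunks in tokens rather than characters, you could pass a token counter as the length_function. A minimal sketch, assuming the tiktoken package is installed (the encoding name and chunk sizes are illustrative):
import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter
enc = tiktoken.get_encoding("gpt2")  # illustrative encoding choice
def tiktoken_len(text: str) -> int:
    # Count tokens instead of characters
    return len(enc.encode(text))
token_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 100,  # now measured in tokens
    chunk_overlap = 20,
    length_function = tiktoken_len,
)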
# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
previous
Text Splitters
next
Character Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html
.ipynb
.pdf
NLTK Text Splitter
NLTK Text Splitter#
Rather than just splitting on “\n\n”, we can use NLTK to split based on tokenizers.
How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies.
previous
Markdown Text Splitter
next
Python Code Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html
.ipynb
.pdf
Markdown Text Splitter
Markdown Text Splitter#
MarkdownTextSplitter splits text along Markdown headings, code blocks, or horizontal rules. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Markdown-specific separators. See the source code to see the Markdown syntax expected by default.
How the text is split: by list of markdown specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import MarkdownTextSplitter
markdown_text = """
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability ⚡
## Quick Install
```bash
# Hopefully this code block isn't split
pip install langchain
```
As an open source project in a rapidly developing field, we are extremely open to contributions.
"""
markdown_splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_text])
docs
[Document(page_content='# 🦜️🔗 LangChain\n\n⚡ Building applications with LLMs through composability ⚡', lookup_str='', metadata={}, lookup_index=0),
Document(page_content="Quick Install\n\n```bash\n# Hopefully this code block isn't split\npip install langchain", lookup_str='', metadata={}, lookup_index=0),
Document(page_content='As an open source project in a rapidly developing field, we are extremely open to contributions.', lookup_str='', metadata={}, lookup_index=0)]
previous
Latex Text Splitter
next
NLTK Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/markdown.html
.ipynb
.pdf
RecursiveCharacterTextSplitter
RecursiveCharacterTextSplitter#
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
How the text is split: by list of characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
previous
Python Code Text Splitter
next
Spacy Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html
.ipynb
.pdf
Tiktoken Text Splitter
Tiktoken Text Splitter#
How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our
previous
tiktoken (OpenAI) Length Function
next
Vectorstores
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken_splitter.html
.ipynb
.pdf
Spacy Text Splitter
Spacy Text Splitter#
Another alternative to NLTK is to use Spacy.
How the text is split: by Spacy
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import SpacyTextSplitter
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
previous
RecursiveCharacterTextSplitter
next
tiktoken (OpenAI) Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html
.ipynb
.pdf
Python Code Text Splitter
Python Code Text Splitter#
PythonCodeTextSplitter splits text along Python class and method definitions. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Python-specific separators. See the source code to see the Python syntax expected by default.
How the text is split: by list of python specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import PythonCodeTextSplitter
python_text = """
class Foo:
def bar():
def foo():
def testing_func():
def bar():
"""
python_splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
docs = python_splitter.create_documents([python_text])
docs
[Document(page_content='Foo:\n\n def bar():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='foo():\n\ndef testing_func():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='bar():', lookup_str='', metadata={}, lookup_index=0)]
previous
NLTK Text Splitter
next
RecursiveCharacterTextSplitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/python.html
.ipynb
.pdf
Character Text Splitter
Character Text Splitter#
This is a simpler method. It splits based on characters (by default “\n\n”) and measures chunk length by number of characters.
How the text is split: by single character
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
separator = "\n\n",
chunk_size = 1000,
chunk_overlap = 200,
length_function = len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
Here’s an example of passing metadata along with the documents, notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
previous
Getting Started
next
Hugging Face Length Function
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
|
566636a32480-0
|
.ipynb
.pdf
Hugging Face Length Function
Hugging Face Length Function#
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use Hugging Face tokenizers to count the text length.
How the text is split: by character passed in
How the chunk size is measured: by Hugging Face tokenizer
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
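As a quick sanity check (an illustrative addition, not part of the original notebook), you can count the tokens in a chunk directly with the same tokenizer used for splitting:
# Each chunk should come in at or under chunk_size=100 tokens.
print(len(tokenizer.encode(texts[0])))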
previous
Character Text Splitter
next
Latex Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/huggingface_length_function.html
|
63914e4e54f5-0
|
.ipynb
.pdf
tiktoken (OpenAI) Length Function
tiktoken (OpenAI) Length Function#
You can also use tiktoken, an open-source tokenizer package from OpenAI, to estimate the number of tokens used. It will likely be more accurate for OpenAI’s models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
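You can verify chunk sizes by encoding a chunk with tiktoken directly (an illustrative sketch; the “gpt2” encoding here is an assumption matching the splitter’s default):
import tiktoken
# Count the tokens in the first chunk; it should respect chunk_size=100.
enc = tiktoken.get_encoding("gpt2")
print(len(enc.encode(texts[0])))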
previous
Spacy Text Splitter
next
TiktokenText Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken.html
|
3440e9cd3eb5-0
|
.ipynb
.pdf
Latex Text Splitter
Latex Text Splitter#
LatexTextSplitter splits text along Latex headings, headlines, enumerations and more. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Latex-specific separators. See the source code for the Latex syntax expected by default.
How the text is split: by list of latex specific tags
How the chunk size is measured: by length function passed in (defaults to number of characters)
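For intuition, the defaults look roughly like the list below (an assumption for illustration; consult the langchain.text_splitter source for the authoritative separators):
# Approximate shape of the Latex-specific separators, tried in order
# from coarsest (sections) to finest (individual characters).
latex_separators = [
    "\n\\chapter{",
    "\n\\section{",
    "\n\\subsection{",
    "\n\\subsubsection{",
    "\n\\begin{enumerate}",
    "\n\\begin{itemize}",
    "\n\n",
    "\n",
    " ",
    "",
]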
from langchain.text_splitter import LatexTextSplitter
latex_text = """
\documentclass{article}
\begin{document}
\maketitle
\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.
\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.
\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.
\end{document}
"""
latex_splitter = LatexTextSplitter(chunk_size=400, chunk_overlap=0)
docs = latex_splitter.create_documents([latex_text])
docs
|
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/latex.html
|
3440e9cd3eb5-1
|
docs = latex_splitter.create_documents([latex_text])
docs
[Document(page_content='\\documentclass{article}\n\n\\begin{document}\n\n\\maketitle', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='History of LLMs}\nThe earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Applications of LLMs}\nLLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n\n\\end{document}', lookup_str='', metadata={}, lookup_index=0)]
previous
Hugging Face Length Function
next
Markdown Text Splitter
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/latex.html
|
48c01de466fb-0
|
.rst
.pdf
How-To Guides
How-To Guides#
A chain is made up of links, which can be either primitives or other chains.
Primitives can be prompts, models, or arbitrary functions.
The examples here are broken up into three sections:
Generic Functionality
Covers both generic chains (that are useful in a wide variety of applications) as well as generic functionality related to those chains.
Async API for Chain
Loading from LangChainHub
LLM Chain
Additional ways of running LLM Chain
Parsing the outputs
Initialize from string
Sequential Chains
Serialization
Transformation Chain
Index-related Chains
Chains related to working with indexes.
Analyze Document
Chat Over Documents with Chat History
Graph QA
Hypothetical Document Embeddings
Question Answering with Sources
Question Answering
Summarization
Retrieval Question/Answering
Retrieval Question Answering with Sources
Vector DB Text Generation
All other chains
All other types of chains!
API Chains
Self-Critique Chain with Constitutional AI
BashChain
LLMCheckerChain
LLM Math
LLMRequestsChain
LLMSummarizationCheckerChain
Moderation
OpenAPI Chain
PAL
SQL Chain example
previous
Getting Started
next
Async API for Chain
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/chains/how_to_guides.html
|
504d9508b6bc-0
|
.ipynb
.pdf
Getting Started
Contents
Why do we need chains?
Quick start: Using LLMChain
Different ways of calling chains
Add memory to chains
Debug Chain
Combine chains with the SequentialChain
Create a custom chain with the Chain class
Getting Started#
In this tutorial, we will learn about creating simple chains in LangChain: how to create a chain, add components to it, and run it.
We will cover:
Using a simple LLM chain
Creating sequential chains
Creating a custom chain
Why do we need chains?#
Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.
Quick start: Using LLMChain#
The LLMChain is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.
To use the LLMChain, first create a prompt template.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
Cheerful Toes.
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-1
|
print(chain.run("colorful socks"))
Cheerful Toes.
You can use a chat model in an LLMChain as well:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="What is a good name for a company that makes {product}?",
input_variables=["product"],
)
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
Rainbow Footwear Co.
Different ways of calling chains#
All classes that inherit from Chain offer a few ways of running chain logic. The most direct one is __call__:
chat = ChatOpenAI(temperature=0)
prompt_template = "Tell me a {adjective} joke"
llm_chain = LLMChain(
llm=chat,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain(inputs={"adjective":"lame"})
{'adjective': 'lame',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.
llm_chain("lame", return_only_outputs=True)
{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-2
|
{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
If the Chain only takes one input key (i.e. only has one element in its input_variables), you can use the run method. Note that run outputs a string instead of a dictionary.
llm_chain.run({"adjective":"lame"})
'Why did the tomato turn red? Because it saw the salad dressing!'
Additionally, in the case of a single input key, you can pass the string directly without specifying the input mapping.
# These two are equivalent
llm_chain.run({"adjective":"lame"})
llm_chain.run("lame")
# These two are also equivalent
llm_chain("lame")
llm_chain({"adjective":"lame"})
{'adjective': 'lame',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
Tips: You can easily integrate a Chain object as a Tool in your Agent via its run method. See an example here.
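For instance, here is a minimal sketch of wrapping the chain above as a Tool (the name and description strings are illustrative assumptions):
from langchain.agents import Tool
# Expose the joke chain to an agent; run accepts a single string input.
joke_tool = Tool(
    name="JokeTeller",
    func=llm_chain.run,
    description="Tells a joke about the given adjective.",
)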
Add memory to chains#
Chain supports taking a BaseMemory object as its memory argument, allowing the Chain object to persist data across multiple calls. In other words, it makes the Chain a stateful object.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory()
)
conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> The first three colors of a rainbow are red, orange, and yellow.
conversation.run("And the next 4?")
# -> The next four colors of a rainbow are green, blue, indigo, and violet.
'The next four colors of a rainbow are green, blue, indigo, and violet.'
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-3
|
'The next four colors of a rainbow are green, blue, indigo, and violet.'
Essentially, BaseMemory defines the interface for how LangChain stores memory. It allows reading stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
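As a minimal sketch of that interface, using the ConversationBufferMemory imported above (the example conversation is made up):
memory = ConversationBufferMemory()
# save_context stores one input/output turn.
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
# load_memory_variables reads everything back as a dict of memory variables.
print(memory.load_memory_variables({}))
# -> {'history': 'Human: Hi there\nAI: Hello! How can I help?'}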
Debug Chain#
It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting verbose to True will print out some internal states of the Chain object while it is being run.
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory(),
verbose=True
)
conversation.run("What is ChatGPT?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: What is ChatGPT?
AI:
> Finished chain.
'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'
Combine chains with the SequentialChain#
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-4
|
Combine chains with the SequentialChain#
The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.
In this tutorial, our sequential chain will:
First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name.
Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below.
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)
# Run the chain specifying only the input variable for the first chain.
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
Rainbow Socks Co.
"Step into Color with Rainbow Socks Co!"
> Finished chain.
"Step into Color with Rainbow Socks Co!"
Create a custom chain with the Chain class#
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-5
|
"Step into Color with Rainbow Socks Co!"
Create a custom chain with the Chain class#
LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of two LLMChains.
In order to create a custom chain:
Start by subclassing the Chain class,
Fill out the input_keys and output_keys properties,
Add the _call method that shows how to execute the chain.
These steps are demonstrated in the example below:
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from typing import Dict, List
class ConcatenateChain(Chain):
chain_1: LLMChain
chain_2: LLMChain
@property
def input_keys(self) -> List[str]:
# Union of the input keys of the two chains.
all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
return list(all_input_vars)
@property
def output_keys(self) -> List[str]:
return ['concat_output']
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
output_1 = self.chain_1.run(inputs)
output_2 = self.chain_2.run(inputs)
return {'concat_output': output_1 + output_2}
Now, we can try running the chain that we defined.
prompt_1 = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain_1 = LLMChain(llm=llm, prompt=prompt_1)
prompt_2 = PromptTemplate(
input_variables=["product"],
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
504d9508b6bc-6
|
prompt_2 = PromptTemplate(
input_variables=["product"],
template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)
concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated output:\n{concat_output}")
Concatenated output:
Kaleidoscope Socks.
"Put Some Color in Your Step!"
That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains.
previous
Chains
next
How-To Guides
Contents
Why do we need chains?
Quick start: Using LLMChain
Different ways of calling chains
Add memory to chains
Debug Chain
Combine chains with the SequentialChain
Create a custom chain with the Chain class
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 25, 2023.
|
https://python.langchain.com/en/latest/modules/chains/getting_started.html
|
444ee436a932-0
|
.ipynb
.pdf
Sequential Chains
Contents
SimpleSequentialChain
Sequential Chain
Memory in Sequential Chains
Sequential Chains#
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains:
SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.
SimpleSequentialChain#
In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next.
Let’s walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
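The second chain then turns that synopsis into a review (a sketch continuing the toy example; the critic prompt wording here is an illustrative assumption):
# This is an LLMChain to write a review of a play given a synopsis.
template = """You are a play critic. Given the synopsis of a play, it is your job to write a review for that play.
Synopsis: {synopsis}
Critic: This is a review for the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
The two chains can then be composed with SimpleSequentialChain, exactly as in the Getting Started example.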
|
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html