PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}
The map_reduce Chain#
This section shows results of using the map_reduce Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Intermediate Steps#
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' None',
' None',
' None'],
'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Custom Prompts#
You can also use your own prompts with this chain. In this example, we will respond in Italian.
question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text in Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.",
' Non pertinente.',
' Non rilevante.',
" Non c'è testo pertinente."],
'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'}
Batch Size#
When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies to LLMs with this parameter. Below is an example of doing so:
llm = OpenAI(batch_size=5, temperature=0)
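The batched LLM can then be passed to the chain loader like any other LLM. A minimal sketch, reusing the docs and query from the examples above:
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)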
The refine Chain#
This section shows results of using the refine Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"}
Intermediate Steps#
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}
Custom Prompts#
You can also use your own prompts with this chain. In this example, we will respond in Italian.
refine_template = (
"The original question is as follows: {question}\n"
"We have provided an existing answer, including sources: {existing_answer}\n"
"We have the opportunity to refine the existing answer "
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question (in Italian). "
"If you do update it, please update the sources as well. "
"If the context isn't useful, return the original answer."
)
refine_prompt = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=refine_template,
)
question_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question in Italian: {question}\n"
)
question_prompt = PromptTemplate(
input_variables=["context_str", "question"], template=question_template
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
{'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"}
The map-rerank Chain#
This section shows results of using the map-rerank Chain to do question answering with sources.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result["output_text"]
' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'
result["intermediate_steps"]
[{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.',
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts#
You can also use your own prompts with this chain. In this example, we will respond in Italian.
from langchain.output_parsers import RegexParser
output_parser = RegexParser(
regex=r"(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
)
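As a quick sanity check, we can exercise the parser on a hand-written completion (the sample string below is illustrative, not actual model output):
sample = "Il presidente ha ringraziato Justice Breyer.\nScore: 100"
output_parser.parse(sample)
{'answer': 'Il presidente ha ringraziato Justice Breyer.', 'score': '100'}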
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
output_parser=output_parser,
)
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result
{'source': 30,
'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.',
'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.',
'score': '100'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'}
Hypothetical Document Embeddings#
This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper.
At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer document, embeds that generated document, and uses the resulting embedding as the final query embedding.
In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate
base_embeddings = OpenAIEmbeddings()
llm = OpenAI()
# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")
Multiple generations#
We can also generate multiple documents and then combine the embeddings of those documents. By default, we combine them by taking the average. We can do this by changing the LLM we use to generate the documents so that it returns multiple completions.
multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
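Under the hood, the default combination step is just an element-wise mean over the generated-document embeddings. A standalone sketch of that step (the embedding dimension of 1536 is an assumption for illustration):
import numpy as np
# stand-ins for the four generated-document embeddings
doc_embeddings = [np.random.rand(1536) for _ in range(4)]
# element-wise mean yields the single combined embedding
combined = np.mean(doc_embeddings, axis=0).tolist()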
Using our own prompts#
Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.
In the example below, let’s condition it to generate text about a state of the union address (because we will use that in the next example).
prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")
Using HyDE#
Now that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example.
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Graph QA#
This notebook goes over how to do question answering over a graph data structure.
Create the graph#
In this section, we construct an example graph. At the moment, this works best for small pieces of text.
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
with open("../../state_of_the_union.txt") as f:
all_text = f.read()
We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.
text = "\n".join(all_text.split("\n\n")[105:108])
text
'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '
graph = index_creator.from_text(text)
We can inspect the created graph.
graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
Querying the graph#
We can now use the graph QA chain to ask questions of the graph.
from langchain.chains import GraphQAChain
chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)
chain.run("what is Intel going to build?")
> Entering new GraphQAChain chain...
Entities Extracted:
Intel
Full Context:
Intel is going to build $20 billion semiconductor "mega site"
Intel is building state-of-the-art factories
Intel is creating 10,000 new good-paying jobs
Intel is helping build Silicon Valley
> Finished chain.
' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'
Save the graph#
We can also save and load the graph.
graph.write_to_gml("graph.gml")
from langchain.indexes.graph import NetworkxEntityGraph
loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
loaded_graph.get_triples()
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
('Intel', 'state-of-the-art factories', 'is building'),
('Intel', '10,000 new good-paying jobs', 'is creating'),
('Intel', 'Silicon Valley', 'is helping build'),
('Field of dreams',
"America's future will be built",
'is the ground on which')]
Retrieval Question Answering with Sources#
This notebook goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
Chain Type#
You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to map_reduce.
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president said "Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."\n',
'sources': '31-pl'}
The above way allows you to really simply change the chain_type, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it to the RetrievalQAWithSourcesChain chain with the combine_documents_chain parameter. For example:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}
Chat Index#
This notebook goes over how to set up a chain to chat with an index. The only difference between this chain and the RetrievalQAChain is that this one allows you to pass in a chat history, which can be used to ask follow-up questions.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
Load in documents. You can replace this with a loader for whatever type of data you want.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
If you had multiple loaders that you wanted to combine, you could do something like the outline below (a concrete sketch follows):
# loaders = [....]
# docs = []
# for loader in loaders:
# docs.extend(loader.load())
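Concretely, a minimal sketch with two loaders (the first file path is a hypothetical placeholder):
loaders = [TextLoader("../../paul_graham_essay.txt"), TextLoader("../../state_of_the_union.txt")]
docs = []
for loader in loaders:
    docs.extend(loader.load())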
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
We now initialize the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
' Justice Stephen Breyer'
Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect which documents were returned.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)
ConversationalRetrievalChain with search_distance#
If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.
vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine document chains with the ConversationalRetrievalChain chain.
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
ConversationalRetrievalChain with Question Answering with sources#
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"
ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example.
from langchain.chains.llm import LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})
Justice Stephen Breyer
get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Sequential Chains#
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
In this notebook we will walk through some examples of how to do this using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains:
SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.
SimpleSequentialChain#
In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next.
Let’s walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")
> Entering new SimpleSequentialChain chain...
Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future.
The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
> Finished chain.
print(review)
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
Sequential Chain#
Of course, not all sequential chains will be as simple as passing a single string in and getting a single string out at every step. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.
Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.
# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.
Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", 'era'], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
chains=[synopsis_chain, review_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["synopsis", "review"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}
Memory in Sequential Chains#
Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains.
For example, using the previous playwright SequentialChain, let's say you wanted to include some context about the date, time, and location of the play, and, using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or you can add a SimpleMemory to the chain to manage this context:
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory
llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date,time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.
Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}
Social Media Post:
"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")
overall_chain = SequentialChain(
memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
chains=[synopsis_chain, review_chain, social_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["social_post_text"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'time': 'December 25th, 8pm PST',
'location': 'Theater in the Park',
'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
LLM Chain#
This notebook showcases a simple LLM chain.
from langchain import PromptTemplate, OpenAI, LLMChain
Single Input#
First, let's go over an example using a single input.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)
> Entering new LLMChain chain...
Prompt after formatting:
Question: What NFL team won the Super Bowl in the year Justin Bieber was born?
Answer: Let's think step by step.
> Finished LLMChain chain.
' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.'
Multiple Inputs#
Now let's go over an example using multiple inputs.
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
> Entering new LLMChain chain...
Prompt after formatting:
Write a sad poem about ducks.
> Finished LLMChain chain. | {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html"
} |
e6d504060000-1 |
"\n\nThe ducks swim in the pond,\nTheir feathers so soft and warm,\nBut they can't help but feel so forlorn.\n\nTheir quacks echo in the air,\nBut no one is there to hear,\nFor they have no one to share.\n\nThe ducks paddle around in circles,\nTheir heads hung low in despair,\nFor they have no one to care.\n\nThe ducks look up to the sky,\nBut no one is there to see,\nFor they have no one to be.\n\nThe ducks drift away in the night,\nTheir hearts filled with sorrow and pain,\nFor they have no one to gain."
From string#
You can also construct an LLMChain from a string template directly.
template = """Write a {adjective} poem about {subject}."""
llm_chain = LLMChain.from_string(llm=OpenAI(temperature=0), template=template)
llm_chain.predict(adjective="sad", subject="ducks")
"\n\nThe ducks swim in the pond,\nTheir feathers so soft and warm,\nBut they can't help but feel so forlorn.\n\nTheir quacks echo in the air,\nBut no one is there to hear,\nFor they have no one to share.\n\nThe ducks paddle around in circles,\nTheir heads hung low in despair,\nFor they have no one to care.\n\nThe ducks look up to the sky,\nBut no one is there to see,\nFor they have no one to be.\n\nThe ducks drift away in the night,\nTheir hearts filled with sorrow and pain,\nFor they have no one to gain."
| {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html"
} |
8c9154d04727-0 | Loading from LangChainHub#
This notebook covers how to load chains from LangChainHub.
from langchain.chains import load_chain
chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("whats 2 raised to .12")
> Entering new LLMMathChain chain...
whats 2 raised to .12
Answer: 1.0791812460476249
> Finished chain.
'Answer: 1.0791812460476249'
Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)
query = "What did the president say about Ketanji Brown Jackson"
chain.run(query) | {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/from_hub.html"
} |
8c9154d04727-1 |
" The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."
| {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/from_hub.html"
} |
d11e9f76f83d-0 | Async API for Chain#
LangChain provides async support for Chains by leveraging the asyncio library.
Async methods are currently supported in LLMChain (through arun, apredict, and acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap.
import asyncio
import time
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
def generate_serially():
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
for _ in range(5):
resp = chain.run(product="toothpaste")
print(resp)
async def async_generate(chain):
resp = await chain.arun(product="toothpaste")
print(resp)
async def generate_concurrently():
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
tasks = [async_generate(chain) for _ in range(5)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s | {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/async_chain.html"
} |
d11e9f76f83d-1 |
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
BrightSmile Toothpaste Company
BrightSmile Toothpaste Co.
BrightSmile Toothpaste
Gleaming Smile Inc.
SparkleSmile Toothpaste
Concurrent executed in 1.54 seconds.
BrightSmile Toothpaste Co.
MintyFresh Toothpaste Co.
SparkleSmile Toothpaste.
Pearly Whites Toothpaste Co.
BrightSmile Toothpaste.
Serial executed in 6.38 seconds.
| {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/async_chain.html"
} |
1c4a6c31397a-0 | Serialization#
This notebook covers how to serialize chains to and from disk. The supported serialization formats are JSON and YAML. Currently, only some chains support this type of serialization; we will grow the number of supported chains over time.
Saving a chain to disk#
First, let's go over how to save a chain to disk. This can be done with the .save method, specifying a file path with a json or yaml extension.
from langchain import PromptTemplate, OpenAI, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)
llm_chain.save("llm_chain.json")
Let’s now take a look at what’s inside this saved file
!cat llm_chain.json
{
"memory": null,
"verbose": true,
"prompt": {
"input_variables": [
"question"
],
"output_parser": null,
"template": "Question: {question}\n\nAnswer: Let's think step by step.",
"template_format": "f-string"
},
"llm": {
"model_name": "text-davinci-003",
"temperature": 0.0,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"n": 1,
"best_of": 1,
"request_timeout": null, | {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/serialization.html"
} |
1c4a6c31397a-1 |
"logit_bias": {},
"_type": "openai"
},
"output_key": "text",
"_type": "llm_chain"
}
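The same chain can be saved as YAML instead, simply by giving the file a .yaml extension; the format is inferred from the suffix (a minimal example, reusing the llm_chain from above):
llm_chain.save("llm_chain.yaml")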
Loading a chain from disk#
We can load a chain from disk by using the load_chain method.
from langchain.chains import load_chain
chain = load_chain("llm_chain.json")
chain.run("whats 2 + 2")
> Entering new LLMChain chain...
Prompt after formatting:
Question: whats 2 + 2
Answer: Let's think step by step.
> Finished chain.
' 2 + 2 = 4'
Saving components separately#
In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component.
llm_chain.prompt.save("prompt.json")
!cat prompt.json
{
"input_variables": [
"question"
],
"output_parser": null,
"template": "Question: {question}\n\nAnswer: Let's think step by step.",
"template_format": "f-string"
}
llm_chain.llm.save("llm.json")
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.0,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0, | {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/serialization.html"
} |
1c4a6c31397a-2 |
"presence_penalty": 0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"logit_bias": {},
"_type": "openai"
}
config = {
"memory": None,
"verbose": True,
"prompt_path": "prompt.json",
"llm_path": "llm.json",
"output_key": "text",
"_type": "llm_chain"
}
import json
with open("llm_chain_separate.json", "w") as f:
json.dump(config, f, indent=2)
!cat llm_chain_separate.json
{
"memory": null,
"verbose": true,
"prompt_path": "prompt.json",
"llm_path": "llm.json",
"output_key": "text",
"_type": "llm_chain"
}
We can then load it in the same way
chain = load_chain("llm_chain_separate.json")
chain.run("whats 2 + 2")
> Entering new LLMChain chain...
Prompt after formatting:
Question: whats 2 + 2
Answer: Let's think step by step.
> Finished chain.
' 2 + 2 = 4'
| {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/serialization.html"
} |
08e56dd381a4-0 | Transformation Chain#
This notebook showcases using a generic transformation chain.
As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those.
from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
def transform_func(inputs: dict) -> dict:
text = inputs["text"]
shortened_text = "\n\n".join(text.split("\n\n")[:3])
return {"output_text": shortened_text}
transform_chain = TransformChain(input_variables=["text"], output_variables=["output_text"], transform=transform_func)
template = """Summarize this text:
{output_text}
Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)
sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)
' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
| {
"url": "https://python.langchain.com/en/latest/modules/chains/generic/transformation.html"
} |
049dcc03841b-0 | Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
class BaseRetriever(ABC):
@abstractmethod
def get_relevant_documents(self, query: str) -> List[Document]:
"""Get texts relevant for a query.
Args:
query: string to find relevant texts for
Returns:
List of relevant documents
"""
It’s that simple! The get_relevant_documents method can be implemented however you see fit.
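For example, here is a minimal, hypothetical implementation that retrieves documents by keyword matching. Real retrievers usually rely on embeddings, but any logic that satisfies the interface works:
# A toy retriever: return every stored document that shares a word with the query.
class KeywordRetriever(BaseRetriever):
    def __init__(self, documents: List[Document]):
        self.documents = documents
    def get_relevant_documents(self, query: str) -> List[Document]:
        keywords = query.lower().split()
        return [
            doc for doc in self.documents
            if any(kw in doc.page_content.lower() for kw in keywords)
        ]
retriever = KeywordRetriever([Document(page_content="LangChain makes building LLM apps easier")])
retriever.get_relevant_documents("LangChain apps")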
Of course, we also help construct what we think are useful Retrievers. The main type of Retriever that we focus on is a Vectorstore retriever, and we will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that.
By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb.
pip install chromadb
This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions! | {
"url": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"
} |
049dcc03841b-1 |
Each of these steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"
} |
049dcc03841b-2 |
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides this convenient query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.
index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
If we then want to access the VectorstoreRetriever, we can do that with:
index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough#
Okay, so what’s actually going on? How is this index getting created?
A lot of the magic is hidden inside this VectorstoreIndexCreator. What is it doing?
There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code
documents = loader.load()
Next, we will split the documents into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
We will then select which embeddings we want to use. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"
} |
049dcc03841b-3 |
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
We now create the vectorstore to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
So that’s creating the index. Then, we expose this index in a retriever interface.
retriever = db.as_retriever()
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"
} |
049dcc03841b-4 |
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/getting_started.html"
} |
47adb2df1d85-0 | Text Splitters#
Note
Conceptual Guide
When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text.
This notebook showcases several ways to do that.
At a high level, text splitters work as follows (a toy sketch of this loop appears after the lists below):
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
How the text is split
How the chunk size is measured
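To make the steps above concrete, here is a toy sketch (a hand-rolled illustration, not LangChain's actual implementation) that splits on sentences and measures chunk size by character count:
# Toy splitter: pack sentences greedily up to chunk_size characters,
# carrying the last sentence over as overlap between chunks.
def naive_split(text, chunk_size=200, overlap=1):
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    chunks, current = [], []
    for sentence in sentences:
        if current and sum(len(s) for s in current) + len(sentence) > chunk_size:
            chunks.append(". ".join(current))
            current = current[-overlap:]  # keep trailing context between chunks
        current.append(sentence)
    if current:
        chunks.append(". ".join(current))
    return chunks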
For an introduction to the default text splitter and generic functionality see:
Getting Started
We also have documentation for all the types of text splitters that are supported.
Please see below for that list.
Character Text Splitter
Hugging Face Length Function
Latex Text Splitter
Markdown Text Splitter
NLTK Text Splitter
Python Code Text Splitter
RecursiveCharacterTextSplitter
Spacy Text Splitter
tiktoken (OpenAI) Length Function
TiktokenText Splitter
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/text_splitters.html"
} |
e1f98dee159b-0 | Retrievers#
Note
Conceptual Guide
The retriever interface is a generic interface that makes it easy to combine documents with
language models. This interface exposes a get_relevant_documents method which takes in a query
(a string) and returns a list of documents.
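For example, with the TF-IDF retriever listed below (a minimal sketch; it assumes TFIDFRetriever is available and scikit-learn is installed):
from langchain.retrievers import TFIDFRetriever
retriever = TFIDFRetriever.from_texts(["foo", "bar", "foo bar"])
retriever.get_relevant_documents("foo")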
Please see below for a list of all the retrievers supported.
ChatGPT Plugin Retriever
ElasticSearch BM25
Metal
Pinecone Hybrid Search
TF-IDF Retriever
VectorStore Retriever
Weaviate Hybrid Search
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/retrievers.html"
} |
58af2813f293-0 | Document Loaders#
Note
Conceptual Guide
Combining language models with your own text data is a powerful way to differentiate them.
The first step in doing this is to load the data into “documents” - a fancy way of saying some pieces of text.
This module is aimed at making this easy.
A primary driver of a lot of this is the Unstructured python package.
This package is a great way to transform all types of files - text, powerpoint, images, html, pdf, etc - into text data.
For detailed instructions on how to get set up with Unstructured, see installation guidelines here.
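Whatever the loader, the result has the same shape: a list of Document objects, each with page_content and metadata. A minimal sketch using the TextLoader listed below (assuming a local text file exists):
from langchain.document_loaders import TextLoader
loader = TextLoader('state_of_the_union.txt')
docs = loader.load()
docs[0].metadata  # e.g. {'source': 'state_of_the_union.txt'}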
The following document loaders are provided:
CoNLL-U
Airbyte JSON
Apify Dataset
AZLyrics
Azure Blob Storage Container
Azure Blob Storage File
BigQuery Loader
Blackboard
College Confidential
Copy Paste
CSV Loader
Specify a column to be used to identify the document source
DataFrame Loader
Directory Loader
DuckDB Loader
Email
EPubs
EverNote
Facebook Chat
Figma
GCS Directory
GCS File Storage
GitBook
Google Drive
Gutenberg
Hacker News
HTML
iFixit
Images
IMSDb
Markdown
Notebook
Notion
Notion DB Loader
Obsidian
PDF
PowerPoint
ReadTheDocs Documentation
Roam
s3 Directory
s3 File
Sitemap Loader
Subtitle Files
Telegram
Unstructured File Loader
URL
Selenium URL Loader
Web Base
Loading multiple webpages
WhatsApp Chat
Word Documents
YouTube
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders.html"
} |
c51902a69072-0 | Vectorstores#
Note
Conceptual Guide
Vectorstores are one of the most important components of building indexes.
For an introduction to vectorstores and generic functionality see:
Getting Started
We also have documentation for all the types of vectorstores that are supported.
Please see below for that list.
AtlasDB
Chroma
Deep Lake
ElasticSearch
FAISS
Milvus
OpenSearch
PGVector
Pinecone
Qdrant
Redis
Weaviate
Zilliz
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores.html"
} |
3c2142266bd7-0 | Getting Started#
This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.
This covers generic high level functionality related to all vector stores. For guides on specific vectorstores, please see the how-to guides here
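As a quick reminder, an embedding model maps a piece of text to a fixed-length vector of floats, so that similarity between texts becomes distance between vectors. A minimal sanity check (the dimension depends on the model used):
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
len(embeddings.embed_query("hello world"))  # e.g. 1536 for OpenAI's default embedding model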
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"
} |
3c2142266bd7-1 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Add texts#
You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).
docsearch.add_texts(["Ankush went to Princeton"])
['a05e3d0c-ab40-11ed-a853-e65801318981']
query = "Where did Ankush go to college?"
docs = docsearch.similarity_search(query)
docs[0]
Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)
From Documents#
We can also initialize a vectorstore from documents directly. This is useful when we use the method on the text splitter to get documents directly (handy when the original documents have associated metadata).
documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}])
docsearch = Chroma.from_documents(documents, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"
} |
3c2142266bd7-2 |
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html"
} |
1481b38fedc2-0 | FAISS#
This notebook shows how to use functionality related to the FAISS vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity Search with score# | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"
} |
1481b38fedc2-1 |
There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the similarity score of the query to them.
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]
(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
0.3914415)
It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.
embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)
Saving and loading#
You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"
} |
1481b38fedc2-2 | db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0]
Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)
Merging#
You can also merge two FAISS vectorstores
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.docstore._dict
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0)}
db2.docstore._dict | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"
} |
1481b38fedc2-3 |
{'bdc50ae3-a1bb-4678-9260-1b0979578f40': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
db1.merge_from(db2)
db1.docstore._dict
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0),
'd5211050-c777-493d-8825-4800e74cfdb6': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html"
} |
d4d9c54336b4-0 | Milvus#
This notebook shows how to use functionality related to the Milvus vector database.
To run, you should have a Milvus instance up and running: https://milvus.io/docs/install_standalone-docker.md
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"host": "127.0.0.1", "port": "19530"},
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0]
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html"
} |
38e38741bd94-0 | Qdrant#
This notebook shows how to use functionality related to the Qdrant vector database. There are several ways to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:
Local mode, no server required
On-premise server deployment
Qdrant Cloud
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Connecting to Qdrant from LangChain#
Local mode#
The Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging, or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.
In-memory#
For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.
qdrant = Qdrant.from_documents(
docs, embeddings, | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-1 |
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
On-disk storage#
Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.
qdrant = Qdrant.from_documents(
docs, embeddings,
path="/tmp/local_qdrant",
collection_name="my_documents",
)
On-premise server deployment#
No matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you’re going to connect to such an instance will be identical. You’ll need to provide a URL pointing to the service.
url = "<---qdrant url here --->"
qdrant = Qdrant.from_documents(
docs, embeddings,
url, prefer_grpc=True,
collection_name="my_documents",
)
Qdrant Cloud#
If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. A free forever 1GB cluster is included for trying it out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly.
url = "<---qdrant cloud cluster url here --->"
api_key = "<---api key here--->"
qdrant = Qdrant.from_documents(
docs, embeddings,
url, prefer_grpc=True, api_key=api_key,
collection_name="my_documents",
)
Reusing the same collection# | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-2 |
Both the Qdrant.from_texts and Qdrant.from_documents methods are great for getting started with Qdrant in LangChain, but they destroy the collection and create it from scratch! If you want to reuse an existing collection, you can always create an instance of Qdrant yourself and pass a QdrantClient instance with the connection details.
del qdrant
import qdrant_client
client = qdrant_client.QdrantClient(
path="/tmp/local_qdrant", prefer_grpc=True
)
qdrant = Qdrant(
client=client, collection_name="my_documents",
embedding_function=embeddings.embed_query
)
Similarity search#
The simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in the Qdrant collection.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
print(found_docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-3 | And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Score: 0.8153784913324512
Maximum marginal relevance search (MMR)#
If you'd like to look up similar documents but also receive diverse results, MMR is the method to consider. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
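To give a feel for the underlying idea, here is a hand-rolled sketch of MMR selection over raw vectors (an illustration only, not Qdrant's implementation; lambda_mult trades relevance off against diversity):
import numpy as np
def mmr_select(query_vec, doc_vecs, k=2, lambda_mult=0.5):
    # Cosine similarity between two vectors
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    selected, candidates = [], list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected  # indices of the chosen documents: relevant and diverse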
query = "What did the president say about Ketanji Brown Jackson" | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-4 |
found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
print(f"{i + 1}.", doc.page_content, "\n")
1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.
I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-5 | I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
Qdrant as a Retriever#
Qdrant, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.
retriever = qdrant.as_retriever()
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})
You can also specify MMR as the search strategy instead of similarity.
retriever = qdrant.as_retriever(search_type="mmr")
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0] | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-6 |
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Customizing Qdrant#
Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.
By default, your document is going to be stored in the following payload structure:
{
"page_content": "Lorem ipsum dolor sit amet",
"metadata": {
"foo": "bar"
}
}
You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection that you'd like to reuse. You can always change the payload keys by passing the content_payload_key and metadata_payload_key parameters:
Qdrant.from_documents(
docs, embeddings,
location=":memory:",
collection_name="my_documents_2",
content_payload_key="my_page_content_key",
metadata_payload_key="my_meta",
) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
38e38741bd94-7 |
<langchain.vectorstores.qdrant.Qdrant at 0x7fc4e2baa230>
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html"
} |
c78aa5763de3-0 | Weaviate#
This notebook shows how to use functionality related to the Weaviate vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import weaviate
import os
WEAVIATE_URL = ""
client = weaviate.Client(
url=WEAVIATE_URL,
additional_headers={
'X-OpenAI-Api-Key': os.environ["OPENAI_API_KEY"]
}
)
client.schema.delete_all()
client.schema.get()
schema = {
"classes": [
{
"class": "Paragraph",
"description": "A written paragraph",
"vectorizer": "text2vec-openai",
"moduleConfig": {
"text2vec-openai": {
"model": "babbage",
"type": "text"
}
},
"properties": [
{
"dataType": ["text"],
"description": "The content of the paragraph",
"moduleConfig": {
"text2vec-openai": {
"skip": False,
"vectorizePropertyName": False
}
},
"name": "content",
},
],
},
]
}
client.schema.create(schema) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"
} |
c78aa5763de3-1 |
vectorstore = Weaviate(client, "Paragraph", "content")
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].page_content)
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html"
} |
d34a79ada862-0 | AtlasDB#
This notebook shows you how to use functionality related to the AtlasDB vectorstore.
import time
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import SpacyTextSplitter
from langchain.vectorstores import AtlasDB
from langchain.document_loaders import TextLoader
!python -m spacy download en_core_web_sm
ATLAS_TEST_API_KEY = '7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6'
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = SpacyTextSplitter(separator='|')
texts = []
for doc in text_splitter.split_documents(documents):
texts.extend(doc.page_content.split('|'))
texts = [e.strip() for e in texts]
db = AtlasDB.from_texts(texts=texts,
name='test_index_'+str(time.time()), # unique name for your vector store
description='test_index', #a description for your vector store
api_key=ATLAS_TEST_API_KEY,
index_kwargs={'build_topic_model': True})
db.project.wait_for_project_lock()
db.project
test_index_1677255228.136989: 508 datums inserted, 1 index built.
Projection test_index_1677255228.136989_index, status: Completed (Projection ID: db996d77-8981-48a0-897a-ff2c22bbf541).
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/atlas.html"
} |
407e375d8839-0 | Chroma#
This notebook shows how to use functionality related to the Chroma vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
Using embedded DuckDB without persistence: data will be transient
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"
} |
407e375d8839-1 | And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
0.3913410007953644)
Persistence#
The steps below cover how to persist a ChromaDB instance.
Initialize Persisted ChromaDB#
Create embeddings for each chunk and insert into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it’s persisted.
# Embed and store the texts
# Supplying a persist_directory will store the embeddings on disk
persist_directory = 'db' | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"
} |
407e375d8839-2 |
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=docs, embedding=embedding, persist_directory=persist_directory)
Running Chroma using direct local API.
No existing DB found in db, skipping load
No existing DB found in db, skipping load
Persist the Database#
We should call persist() to ensure the embeddings are written to disk.
vectordb.persist()
vectordb = None
Persisting DB to disk, putting it in the save folder db
PersistentDuckDB del, about to run persist
Persisting DB to disk, putting it in the save folder db
Load the Database from disk, and create the chain#
Be sure to pass the same persist_directory and embedding_function as you did when you instantiated the database. Initialize the chain we will use for question answering.
# Now we can load the persisted database from disk, and use it as normal.
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
Running Chroma using direct local API.
loaded in 4 embeddings
loaded in 1 collections
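The notebook stops short of showing the question-answering chain mentioned above; a minimal sketch, assuming the RetrievalQA chain used elsewhere in these docs and an OpenAI LLM, might look like this:
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
# Build a QA chain on top of the persisted store; `vectordb` is the
# Chroma instance loaded from disk above.
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectordb.as_retriever())
qa.run("What did the president say about Ketanji Brown Jackson")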
Retriever options#
This section goes over different options for how to use Chroma as a retriever.
MMR#
In addition to using similarity search in the retriever object, you can also use mmr.
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0] | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"
} |
407e375d8839-3 | retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html"
} |
85d1f4d9ad9b-0 | PGVector#
This notebook shows how to use functionality related to the Postgres vector database (PGVector).
## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_dotenv()
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.document_loaders import TextLoader
from langchain.docstore.document import Document
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
## PGVector needs the connection string to the database.
## We will load it from the environment variables.
import os
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("PGVECTOR_HOST", "localhost"),
port=int(os.environ.get("PGVECTOR_PORT", "5432")),
database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
user=os.environ.get("PGVECTOR_USER", "postgres"),
password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),
)
## Example
# postgresql+psycopg2://username:password@localhost:5432/database_name
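For reference, the helper above produces a standard SQLAlchemy-style URL; a hard-coded equivalent of the call above (illustrative values only) would be:
# Equivalent to the defaults used in connection_string_from_db_params above
CONNECTION_STRING = "postgresql+psycopg2://postgres:postgres@localhost:5432/postgres"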
Similarity search with score#
Similarity Search with Euclidean Distance (Default)#
# The PGVector Module will try to create a table with the name of the collection. So, make sure that the collection name is unique and the user has the
# permission to create a table. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"
} |
85d1f4d9ad9b-1 | # permission to create a table.
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name="state_of_the_union",
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
print("-" * 80)
print("Score: ", score)
print(doc.page_content)
print("-" * 80)
--------------------------------------------------------------------------------
Score: 0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"
} |
85d1f4d9ad9b-2 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"
} |
85d1f4d9ad9b-3 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html"
} |
69caf6cd8b16-0 | Redis#
This notebook shows how to use functionality related to the Redis vector database.
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
rds.index_name
'link'
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
print(rds.add_texts(["Ankush went to Princeton"])) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"
} |
69caf6cd8b16-1 | print(rds.add_texts(["Ankush went to Princeton"]))
['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']
query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton
# Load from existing index
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link')
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
RedisVectorStoreRetriever#
Here we go over different options for using the vector store as a retriever.
There are three different search methods we can use to do retrieval. By default, it will use semantic similarity.
retriever = rds.as_retriever()
docs = retriever.get_relevant_documents(query) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"
} |
69caf6cd8b16-2 | docs = retriever.get_relevant_documents(query)
We can also use similarity_limit as a search method. This only returns documents if they are similar enough.
retriever = rds.as_retriever(search_type="similarity_limit")
# Here we can see it doesn't return any results because there are no relevant documents
retriever.get_relevant_documents("where did ankush go to college?")
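Retrievers built this way also accept keyword arguments; a sketch, assuming the standard search_kwargs parameter of as_retriever (the k value here is illustrative):
# Limit retrieval to the top 2 semantically similar documents
retriever = rds.as_retriever(search_type="similarity", search_kwargs={"k": 2})
docs = retriever.get_relevant_documents(query)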
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html"
} |
ad59e55bcfd8-0 | Deep Lake#
This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully fledged serverless data lake with version control, a query engine, and a streaming dataloader for deep learning frameworks.
For more information, please see the Deep Lake documentation or API reference.
!python3 -m pip install openai deeplake
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.document_loaders import TextLoader
import os
os.environ['OPENAI_API_KEY'] = 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DeepLake.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Retrieval Question/Answering#
from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat
qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever()) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"
} |
ad59e55bcfd8-1 | query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)
Attribute based filtering in metadata#
import random
for d in docs:
d.metadata['year'] = random.randint(2012, 2014)
db = DeepLake.from_documents(docs, embeddings)
db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})
Choosing distance function#
Distance functions: L2 for Euclidean, L1 for nuclear, max for L-infinity distance, cos for cosine similarity, and dot for dot product.
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')
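Any of the metrics listed above can be passed the same way; for example, to rank by Euclidean distance instead (the 'L2' spelling is taken from the list above):
# Same query, ranked by Euclidean (L2) distance
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='L2')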
Maximal Marginal relevance#
Using maximal marginal relevance
db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local#
By default, Deep Lake datasets are stored in memory. If you want to persist locally or to any object storage, simply provide a path to the dataset. You can retrieve the token from app.activeloop.ai.
!activeloop login -t <token>
# Embed and store the texts
dataset_path = "hub://{username}/{dataset_name}" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
vectordb = DeepLake.from_documents(documents=docs, embedding=embedding, dataset_path=dataset_path)
query = "What did the president say about Ketanji Brown Jackson"
docs = vectordb.similarity_search(query)
print(docs[0].page_content)
vectordb.ds.summary()
embeddings = vectordb.ds.embedding.numpy()
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"
} |
ad59e55bcfd8-2 | embeddings = vectordb.ds.embedding.numpy()
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html"
} |
f48103fc733f-0 | Zilliz#
This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.
To run, you should have a Zilliz Cloud instance up and running: https://zilliz.com/cloud
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
# Replace with the endpoint and port of your Zilliz Cloud instance
ZILLIZ_CLOUD_HOSTNAME = "" # example: "in01-17f69c292d4a50a.aws-us-west-2.vectordb.zillizcloud.com"
ZILLIZ_CLOUD_PORT = "" #example: "19532"
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"host": ZILLIZ_CLOUD_HOSTNAME, "port": ZILLIZ_CLOUD_PORT},
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0]
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html"
} |
5a903e1b78c2-0 | OpenSearch#
This notebook shows how to use functionality related to the OpenSearch database.
To run, you should have an OpenSearch instance up and running.
By default, similarity_search performs an Approximate k-NN search, which uses one of several algorithms (lucene, nmslib, faiss) recommended for large datasets. To perform a brute-force search, there are other search methods known as Script Scoring and Painless Scripting.
See the OpenSearch documentation for more details.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Approximate k-NN Search with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48) | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"
} |
5a903e1b78c2-1 | query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Script Scoring with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring")
print(docs[0].page_content)
similarity_search using Painless Scripting with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosinesimil", pre_filter=filter)
print(docs[0].page_content)
Using a preexisting OpenSearch instance#
It’s also possible to use a preexisting OpenSearch instance with documents that already have vectors present.
# this is just an example, you would need to change these values to point to another opensearch instance
docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200")
# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"
} |
5a903e1b78c2-2 | docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html"
} |
91ec25b6137d-0 | Pinecone#
This notebook shows how to use functionality related to the Pinecone vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import pinecone
# initialize pinecone
pinecone.init(
api_key="YOUR_API_KEY", # find at app.pinecone.io
environment="YOUR_ENV" # next to api key in console
)
index_name = "langchain-demo"
docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
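If the index has already been populated, you can connect to it without re-inserting documents; a sketch, assuming the from_existing_index constructor:
# Reuse an index that already contains embedded documents
docsearch = Pinecone.from_existing_index(index_name, embeddings)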
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html"
} |
04eb7f65a4f9-0 | ElasticSearch#
This notebook shows how to use functionality related to the ElasticSearch database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. | {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"
} |
04eb7f65a4f9-1 | And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
| {
"url": "https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html"
} |