get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.

def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
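A second turn is where the formatter actually matters: once chat_history is non-empty, each (human, ai) tuple is rendered through get_chat_history before the question is condensed. A minimal sketch of a follow-up call (the follow-up question here is hypothetical):

chat_history = [(query, result["answer"])]
query = "Did he mention who she is succeeding"  # hypothetical follow-up question
result = qa({"question": query, "chat_history": chat_history})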
https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
Hypothetical Document Embeddings#
This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper. At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer document, embeds that generated document, and uses that embedding for retrieval.
In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.

from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate

base_embeddings = OpenAIEmbeddings()
llm = OpenAI()

# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")

# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")

Multiple generations#
We can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents to return multiple things.

multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
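Under the hood, the combination step is just a vector average. A rough, illustrative sketch of the idea (the class handles this internally; the two documents below are stand-ins for LLM generations):

import numpy as np

hypothetical_docs = ["generated answer one", "generated answer two"]  # placeholders for LLM output
vectors = base_embeddings.embed_documents(hypothetical_docs)
combined = np.mean(np.array(vectors), axis=0).tolist()  # averaged query embedding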
Using our own prompts#
Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that. In the example below, let's condition it to generate text about a state of the union address (because we will use that in the next example).

prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")
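The custom prompt composes with the multiple-generations trick above: pass the multi-generation LLM into the LLMChain and the domain-conditioned hypothetical documents get averaged. A minimal sketch, reusing multi_llm and prompt from earlier:

llm_chain = LLMChain(llm=multi_llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")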
Using HyDE#
Now that we have HyDE, we can use it as we would any other embedding class! Here we use it to find similar passages in the state of the union example.

from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Chroma.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.

We cannot let this happen.

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
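Because docsearch is an ordinary Chroma vector store, the HyDE embeddings can also back a full retrieval chain. A minimal sketch, reusing the RetrievalQA chain covered later in these docs:

from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
qa.run("What did the president say about Ketanji Brown Jackson")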
https://python.langchain.com/en/latest/modules/chains/index_examples/hyde.html
Question Answering with Sources#
This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in-depth explanation of what these chain types are, see here.

Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
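The source metadata attached above is what the chain later surfaces in its SOURCES line. An illustrative peek at what one retrieved document carries (the exact index depends on which chunk matched):

docs[0].metadata
# e.g. {'source': '30-pl'}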
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

If you want more control and understanding over what is happening, please see the information below.

The stuff Chain#
This section shows results of using the stuff Chain to do question answering with sources.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}

The map_reduce Chain#
This section shows results of using the map_reduce Chain to do question answering with sources.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
' None',
' None',
' None'],
'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text in Italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
    template=question_prompt_template, input_variables=["context", "question"]
)

combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
COMBINE_PROMPT = PromptTemplate(
    template=combine_prompt_template, input_variables=["summaries", "question"]
)

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", ' Non pertinente.', ' Non rilevante.', " Non c'è testo pertinente."], 'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.', '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

refine_template = (
    "The original question is as follows: {question}\n"
    "We have provided an existing answer, including sources: {existing_answer}\n"
    "We have the opportunity to refine the existing answer"
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
"answer the question (in Italian)" "If you do update it, please update the sources as well. " "If the context isn't useful, return the original answer." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_template, ) question_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question in Italian: {question}\n" ) question_prompt = PromptTemplate( input_variables=["context_str", "question"], template=question_template ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True) query = "What did the president say about Justice Breyer" result = chain({"input_documents": docs, "question": query}, return_only_outputs=True) result["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.' result["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'},
'score': '100'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}]

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

from langchain.output_parsers import RegexParser

output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:

Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]

Begin!

Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    output_parser=output_parser,
)

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result

{'source': 30,
'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.',
'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'},
{'answer': ' Non so.', 'score': '0'},
{'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.', 'score': '100'}],
'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'}
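As a quick sanity check on the parsing step, the RegexParser defined above can be exercised directly on a made-up completion string:

output_parser.parse("Il presidente ha ringraziato Justice Breyer.\nScore: 95")
# -> {'answer': 'Il presidente ha ringraziato Justice Breyer.', 'score': '95'}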
https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html
Analyze Document#
The AnalyzeDocumentChain is more of an end-to-end chain. It takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

Summarize#
Let's take a look at it in action below, using it to summarize a long document.

from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")

from langchain.chains import AnalyzeDocumentChain
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
summarize_document_chain.run(state_of_the_union)

" In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."
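The splitting step uses the chain's default text splitter. A hedged sketch of swapping in a custom splitter, assuming the text_splitter field exposed by contemporary versions of AnalyzeDocumentChain:

from langchain.text_splitter import CharacterTextSplitter

# chunk_size here is an arbitrary illustration, not a recommended value
custom_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain, text_splitter=custom_splitter)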
Question Answering#
Let's take a look at this using a question answering chain.

from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")

' The president thanked Justice Breyer for his service.'
https://python.langchain.com/en/latest/modules/chains/index_examples/analyze_document.html
Retrieval Question/Answering#
This example showcases question answering over an index.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader

loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Chain Type#
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to map_reduce.

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

The above way allows you to really simply change the chain_type, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass that directly to the RetrievalQA chain with the combine_documents_chain parameter. For example:

from langchain.chains.question_answering import load_qa_chain
qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Custom Prompts#
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the base question answering chain.

from langchain.prompts import PromptTemplate
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani."

Return Source Documents#
Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), return_source_documents=True)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"query": query})
result["result"]

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

result["source_documents"]

[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
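One more knob worth knowing: the retriever itself controls how many chunks feed the chain. as_retriever() passes search parameters through to the vector store; a minimal sketch (k=2 is an arbitrary choice):

retriever = docsearch.as_retriever(search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)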
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
Retrieval Question Answering with Sources#
This notebook goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI

chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n',
'sources': '31-pl'}

Chain Type#
You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook.
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to map_reduce.

chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)

{'answer': ' The president said "Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."\n',
'sources': '31-pl'}

The above way allows you to really simply change the chain_type, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass that directly to the RetrievalQAWithSourcesChain chain with the combine_documents_chain parameter. For example:

from langchain.chains.qa_with_sources import load_qa_with_sources_chain
qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n', 'sources': '31-pl'}
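If you also want the underlying documents rather than just the cited source keys, the chain accepts a return_source_documents flag, mirroring RetrievalQA above; a hedged sketch:

chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever(), return_source_documents=True)
result = chain({"question": "What did the president say about Justice Breyer"})
result["source_documents"]  # the retrieved Document objects backing the answer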
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html
Question Answering#
This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chains: stuff, map_reduce, refine, map_rerank. For a more in-depth explanation of what these chain types are, see here.

Prepare Data#
First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.indexes.vectorstore import VectorstoreIndexCreator

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

query = "What did the president say about Justice Breyer"
docs = docsearch.get_relevant_documents(query)

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
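docs here is a plain list of Document objects, so it can be inspected before any chain runs. An illustrative peek (outputs will vary):

len(docs)
# e.g. 4 (the retriever's default number of results)
docs[0].page_content[:80]
# first 80 characters of the best-matching chunk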
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)

' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'

If you want more control and understanding over what is happening, please see the information below.

The stuff Chain#
This section shows results of using the stuff Chain to do question answering.

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'} The map_reduce Chain# This sections shows results of using the map_reduce Chain to do question answering. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'} Intermediate Steps We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.', ' None', ' None'],
' None',
' None'],
'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.
Return any relevant text translated into italian.
{context}
Question: {question}
Relevant text, if any, in Italian:"""
QUESTION_PROMPT = PromptTemplate(
    template=question_prompt_template, input_variables=["context", "question"]
)

combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian.
If you don't know the answer, just say that you don't know. Don't try to make up an answer.

QUESTION: {question}
=========
{summaries}
=========
Answer in Italian:"""
COMBINE_PROMPT = PromptTemplate(
    template=combine_prompt_template, input_variables=["summaries", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', ' Non ha detto nulla riguardo a Justice Breyer.', " Non c'è testo pertinente."], 'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering. chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable. chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.',
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'],
'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}

Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.

refine_prompt_template = (
    "The original question is as follows: {question}\n"
    "We have provided an existing answer: {existing_answer}\n"
    "We have the opportunity to refine the existing answer"
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question. "
    "If the context isn't useful, return the original answer. Reply in Italian."
)
refine_prompt = PromptTemplate(
    input_variables=["question", "existing_answer", "context_str"],
"Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n" ) initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.', "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.",
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.", "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True) query = "What did the president say about Justice Breyer" results = chain({"input_documents": docs, "question": query}, return_only_outputs=True) results["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.' results["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'}, {'answer': ' This document does not answer the question', 'score': '0'},
{'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}]
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian. from langchain.output_parsers import RegexParser output_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"], ) prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format: Question: [question here] Helpful Answer In Italian: [answer here] Score: [score between 0 and 100] Begin! Context: --------- {context} --------- Question: {question} Helpful Answer In Italian:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser, ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.', 'score': '100'},
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Non so.', 'score': '0'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}
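As a quick sanity check on the custom output parser, RegexParser just splits a raw completion on the Score: line using the regex defined above. The snippet below is an illustrative sketch; the completion string is made up, not taken from the run above:
output_parser.parse("Il presidente ha ringraziato il giudice Breyer.\nScore: 100")
{'answer': 'Il presidente ha ringraziato il giudice Breyer.', 'score': '100'}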
Graph QA#
This notebook goes over how to do question answering over a graph data structure.
Create the graph#
In this section, we construct an example graph. At the moment, this works best for small pieces of text. from langchain.indexes import GraphIndexCreator from langchain.llms import OpenAI from langchain.document_loaders import TextLoader index_creator = GraphIndexCreator(llm=OpenAI(temperature=0)) with open("../../state_of_the_union.txt") as f: all_text = f.read() We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment. text = "\n".join(all_text.split("\n\n")[105:108]) text 'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. ' graph = index_creator.from_text(text) We can inspect the created graph. graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]
Querying the graph#
We can now use the graph QA chain to ask questions of the graph. from langchain.chains import GraphQAChain chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True) chain.run("what is Intel going to build?") > Entering new GraphQAChain chain... Entities Extracted: Intel Full Context: Intel is going to build $20 billion semiconductor "mega site" Intel is building state-of-the-art factories Intel is creating 10,000 new good-paying jobs Intel is helping build Silicon Valley > Finished chain. ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'
Save the graph#
We can also save and load the graph. graph.write_to_gml("graph.gml") from langchain.indexes.graph import NetworkxEntityGraph loaded_graph = NetworkxEntityGraph.from_gml("graph.gml") loaded_graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]
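Since loaded_graph is a NetworkxEntityGraph just like the one built above, it can be handed straight back to a GraphQAChain. A minimal sketch (not part of the original notebook; the question and answer are illustrative):
loaded_chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=loaded_graph, verbose=True)
loaded_chain.run("how many jobs is Intel creating?")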
Vector DB Text Generation#
This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.
Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on GitHub and split them into small enough Documents. from langchain.llms import OpenAI from langchain.docstore.document import Document import requests from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.prompts import PromptTemplate import pathlib import subprocess import tempfile def get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path)
github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url}) sources = get_github_docs("yirenlu92", "deno-manual-forked") source_chunks = [] splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0) for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'...
Set Up Vector DB#
Now that we have the documentation content in chunks, let’s put all this information in a vector index for easy retrieval. search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())
Set Up LLM Chain with Custom Prompt#
Next, let’s set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user. from langchain.chains import LLMChain prompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "topic"] ) llm = OpenAI(temperature=0) chain = LLMChain(llm=llm, prompt=PROMPT)
Generate Text#
Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain. def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs)) generate_blog_post("environment variables")
[{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running
the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in
Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}]
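Note that chain.apply above produces one independent draft per retrieved chunk. If a single post grounded in all of the retrieved context is preferred, one variant (a sketch under that assumption, not part of the original notebook) is to join the chunks into a single context string before calling the chain:
def generate_blog_post_combined(topic):
    docs = search_index.similarity_search(topic, k=4)
    # Concatenate all retrieved chunks into one context for a single generation
    context = "\n\n".join(doc.page_content for doc in docs)
    print(chain.run(context=context, topic=topic))
Keep in mind this packs four chunks into one prompt, so it can run up against the model's context window.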
LLM Chain#
LLMChain is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output. Below we show additional functionalities of the LLMChain class. from langchain import PromptTemplate, OpenAI, LLMChain prompt_template = "What is a good name for a company that makes {product}?" llm = OpenAI(temperature=0) llm_chain = LLMChain( llm=llm, prompt=PromptTemplate.from_template(prompt_template) ) llm_chain("colorful socks") {'product': 'colorful socks', 'text': '\n\nSocktastic!'}
Additional ways of running LLM Chain#
Aside from the __call__ and run methods shared by all Chain objects (see Getting Started to learn more), LLMChain offers a few more ways of calling the chain logic: apply allows you to run the chain against a list of inputs: input_list = [ {"product": "socks"}, {"product": "computer"}, {"product": "shoes"} ] llm_chain.apply(input_list) [{'text': '\n\nSocktastic!'}, {'text': '\n\nTechCore Solutions.'}, {'text': '\n\nFootwear Factory.'}] generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation info such as token usage and finish reason. llm_chain.generate(input_list)
LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'}) predict is similar to the run method except that the input keys are specified as keyword arguments instead of a Python dict. # Single input example llm_chain.predict(product="colorful socks") '\n\nSocktastic!' # Multiple inputs example template = """Tell me a {adjective} joke about {subject}.""" prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0)) llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
Parsing the outputs#
By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser on the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply. With predict: from langchain.output_parsers import CommaSeparatedListOutputParser output_parser = CommaSeparatedListOutputParser() template = """List all the colors in a rainbow"""
template = """List all the colors in a rainbow""" prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser) llm_chain = LLMChain(prompt=prompt, llm=llm) llm_chain.predict() '\n\nRed, orange, yellow, green, blue, indigo, violet' With predict_and_parser: llm_chain.predict_and_parse() ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] Initialize from string# You can also construct an LLMChain from a string template directly. template = """Tell me a {adjective} joke about {subject}.""" llm_chain = LLMChain.from_string(llm=llm, template=template) llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.' previous Loading from LangChainHub next Router Chains Contents LLM Chain Additional ways of running LLM Chain Parsing the outputs Initialize from string By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
Serialization#
This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.
Saving a chain to disk#
First, let’s go over how to save a chain to disk. This can be done with the .save method, and specifying a file path with a json or yaml extension. from langchain import PromptTemplate, OpenAI, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True) llm_chain.save("llm_chain.json") Let’s now take a look at what’s inside this saved file. !cat llm_chain.json { "memory": null, "verbose": true, "prompt": { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }, "llm": { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null,
"best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }, "output_key": "text", "_type": "llm_chain" } Loading a chain from disk# We can load a chain from disk by using the load_chain method. from langchain.chains import load_chain chain = load_chain("llm_chain.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' Saving components separately# In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component. llm_chain.prompt.save("prompt.json") !cat prompt.json { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" } llm_chain.llm.save("llm.json") !cat llm.json { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0,
"top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" } config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } import json with open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2) !cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } We can then load it in the same way chain = load_chain("llm_chain_separate.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' previous Sequential Chains next Transformation Chain Contents Saving a chain to disk Loading a chain from disk Saving components separately By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
Async API for Chain#
LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, acall) and LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap. import asyncio import time from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain def generate_serially(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) for _ in range(5): resp = chain.run(product="toothpaste") print(resp) async def async_generate(chain): resp = await chain.arun(product="toothpaste") print(resp) async def generate_concurrently(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) tasks = [async_generate(chain) for _ in range(5)] await asyncio.gather(*tasks) s = time.perf_counter() # If running this outside of Jupyter, use asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m') s = time.perf_counter() generate_serially() elapsed = time.perf_counter() - s print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m') BrightSmile Toothpaste Company BrightSmile Toothpaste Co. BrightSmile Toothpaste Gleaming Smile Inc. SparkleSmile Toothpaste Concurrent executed in 1.54 seconds. BrightSmile Toothpaste Co. MintyFresh Toothpaste Co. SparkleSmile Toothpaste. Pearly Whites Toothpaste Co. BrightSmile Toothpaste. Serial executed in 6.38 seconds.
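The async entry points also work one call at a time; for example apredict, listed among the supported methods above, is the async twin of predict. A minimal sketch (illustrative; the generated name will vary, and outside Jupyter this needs an asyncio event loop):
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?"))
resp = await chain.apredict(product="toothpaste")
print(resp)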
Sequential Chains#
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another. In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains: SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next. SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.
SimpleSequentialChain#
In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next. Let’s walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play. from langchain.llms import OpenAI from langchain.chains import LLMChain from langchain.prompts import PromptTemplate # This is an LLMChain to write a synopsis given a title of a play. llm = OpenAI(temperature=.7) template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title. Title: {title} Playwright: This is a synopsis for the above play:""" prompt_template = PromptTemplate(input_variables=["title"], template=template) synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis. llm = OpenAI(temperature=.7) template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:""" prompt_template = PromptTemplate(input_variables=["synopsis"], template=template) review_chain = LLMChain(llm=llm, prompt=prompt_template) # This is the overall chain where we run these two chains in sequence. from langchain.chains import SimpleSequentialChain overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True) review = overall_chain.run("Tragedy at sunset on the beach") > Entering new SimpleSequentialChain chain... Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future. The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset. Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful. > Finished chain. print(review) Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
Sequential Chain#
Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs. Of particular importance is how we name the input/output variables. In the above example we didn’t have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about that because we have multiple inputs. # This is an LLMChain to write a synopsis given a title of a play and the era it is set in. llm = OpenAI(temperature=.7) template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title. Title: {title} Era: {era} Playwright: This is a synopsis for the above play:""" prompt_template = PromptTemplate(input_variables=["title", 'era'], template=template) synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis") # This is an LLMChain to write a review of a play given a synopsis. llm = OpenAI(temperature=.7) template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis}
Review from a New York Times play critic of the above play:""" prompt_template = PromptTemplate(input_variables=["synopsis"], template=template) review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review") # This is the overall chain where we run these two chains in sequence. from langchain.chains import SequentialChain overall_chain = SequentialChain( chains=[synopsis_chain, review_chain], input_variables=["era", "title"], # Here we return multiple variables output_variables=["synopsis", "review"], verbose=True) overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"}) > Entering new SequentialChain chain... > Finished chain. {'title': 'Tragedy at sunset on the beach', 'era': 'Victorian England',
'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."} Memory in Sequential Chains# Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to do manage this and clean up your chains. For example, using the previous playwright SequentialChain, lets say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or we can add a SimpleMemory to the chain to manage this context: from langchain.chains import SequentialChain from langchain.memory import SimpleMemory llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play. Here is some context about the time and location of the play: Date and Time: {time} Location: {location} Play Synopsis: {synopsis} Review from a New York Times play critic of the above play: {review} Social Media Post: """ prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template) social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text") overall_chain = SequentialChain( memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}), chains=[synopsis_chain, review_chain, social_chain], input_variables=["era", "title"], # Here we return multiple variables output_variables=["social_post_text"], verbose=True) overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"}) > Entering new SequentialChain chain... > Finished chain. {'title': 'Tragedy at sunset on the beach', 'era': 'Victorian England', 'time': 'December 25th, 8pm PST', 'location': 'Theater in the Park',
'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
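As the name suggests, SimpleMemory just hands the same fixed dictionary back on every call, which is what makes it a good fit for static context like the date and venue. A quick illustrative check (not part of the original notebook):
memory = SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"})
memory.load_memory_variables({})
{'time': 'December 25th, 8pm PST', 'location': 'Theater in the Park'}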
Creating a custom Chain#
To implement your own custom chain you can subclass Chain and implement the following methods: from __future__ import annotations from typing import Any, Dict, List, Optional from pydantic import Extra from langchain.base_language import BaseLanguageModel from langchain.callbacks.manager import ( AsyncCallbackManagerForChainRun, CallbackManagerForChainRun, ) from langchain.chains.base import Chain from langchain.prompts.base import BasePromptTemplate class MyCustomChain(Chain): """ An example of a custom chain. """ prompt: BasePromptTemplate """Prompt object to use.""" llm: BaseLanguageModel output_key: str = "text" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] def _call( self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass
# a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = self.llm.generate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None ) # If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: run_manager.on_text("Log something about this run") return {self.output_key: response.generations[0][0].text} async def _acall( self, inputs: Dict[str, Any], run_manager: Optional[AsyncCallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass # a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = await self.llm.agenerate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None )
# If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: await run_manager.on_text("Log something about this run") return {self.output_key: response.generations[0][0].text} @property def _chain_type(self) -> str: return "my_custom_chain" from langchain.callbacks.stdout import StdOutCallbackHandler from langchain.chat_models.openai import ChatOpenAI from langchain.prompts.prompt import PromptTemplate chain = MyCustomChain( prompt=PromptTemplate.from_template('tell us a joke about {topic}'), llm=ChatOpenAI() ) chain.run({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()]) > Entering new MyCustomChain chain... Log something about this run > Finished chain. 'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'
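Because _acall is implemented as well, the same instance can be awaited from async code. A minimal sketch (illustrative; run it inside an event loop such as a notebook cell, and the joke returned will vary):
await chain.acall({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()])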
Transformation Chain#
This notebook showcases using a generic transformation chain. As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those. from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain from langchain.llms import OpenAI from langchain.prompts import PromptTemplate with open("../../state_of_the_union.txt") as f: state_of_the_union = f.read() def transform_func(inputs: dict) -> dict: text = inputs["text"] shortened_text = "\n\n".join(text.split("\n\n")[:3]) return {"output_text": shortened_text} transform_chain = TransformChain(input_variables=["text"], output_variables=["output_text"], transform=transform_func) template = """Summarize this text: {output_text} Summary:""" prompt = PromptTemplate(input_variables=["output_text"], template=template) llm_chain = LLMChain(llm=OpenAI(), prompt=prompt) sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain]) sequential_chain.run(state_of_the_union) ' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
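The transform function can be any plain dict-in/dict-out callable, so the same pattern covers other cheap preprocessing steps. For instance, a character-based truncation instead of paragraph slicing (an illustrative sketch, not from the original notebook):
def truncate_func(inputs: dict) -> dict:
    # Keep only the first 2,000 characters of the input text
    return {"output_text": inputs["text"][:2000]}
truncate_chain = TransformChain(input_variables=["text"], output_variables=["output_text"], transform=truncate_func)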
Loading from LangChainHub#
This notebook covers how to load chains from LangChainHub. from langchain.chains import load_chain chain = load_chain("lc://chains/llm-math/chain.json") chain.run("whats 2 raised to .12") > Entering new LLMMathChain chain... whats 2 raised to .12 Answer: 1.0791812460476249 > Finished chain. 'Answer: 1.0791812460476249' Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain import OpenAI, VectorDBQA from langchain.document_loaders import TextLoader loader = TextLoader('../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore) query = "What did the president say about Ketanji Brown Jackson" chain.run(query)
" The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."
Router Chains#
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input. Router chains are made up of two components: The RouterChain itself (responsible for selecting the next chain to call) destination_chains: chains that the router chain can route to In this notebook we will focus on the different types of routing chains. We will show these routing chains used in a MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt. from langchain.chains.router import MultiPromptChain from langchain.llms import OpenAI from langchain.chains import ConversationChain from langchain.chains.llm import LLMChain from langchain.prompts import PromptTemplate physics_template = """You are a very smart physics professor. \ You are great at answering questions about physics in a concise and easy to understand manner. \ When you don't know the answer to a question you admit that you don't know. Here is a question: {input}""" math_template = """You are a very good mathematician. You are great at answering math questions. \ You are so good because you are able to break down hard problems into their component parts, \ answer the component parts, and then put them together to answer the broader question. Here is a question: {input}""" prompt_infos = [ { "name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template }, { "name": "math", "description": "Good for answering math questions",
"description": "Good for answering math questions", "prompt_template": math_template } ] llm = OpenAI() destination_chains = {} for p_info in prompt_infos: name = p_info["name"] prompt_template = p_info["prompt_template"] prompt = PromptTemplate(template=prompt_template, input_variables=["input"]) chain = LLMChain(llm=llm, prompt=prompt) destination_chains[name] = chain default_chain = ConversationChain(llm=llm, output_key="text") LLMRouterChain# This chain uses an LLM to determine how to route things. from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos] destinations_str = "\n".join(destinations) router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format( destinations=destinations_str ) router_prompt = PromptTemplate( template=router_template, input_variables=["input"], output_parser=RouterOutputParser(), ) router_chain = LLMRouterChain.from_llm(llm, router_prompt) chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True) print(chain.run("What is black body radiation?")) > Entering new MultiPromptChain chain... physics: {'input': 'What is black body radiation?'} > Finished chain.
Black body radiation is the term used to describe the electromagnetic radiation emitted by a “black body”—an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum. print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3")) > Entering new MultiPromptChain chain... math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'} > Finished chain. ? The answer is 43. One plus 43 is 44 which is divisible by 3. print(chain.run("What is the name of the type of cloud that rins")) > Entering new MultiPromptChain chain... None: {'input': 'What is the name of the type of cloud that rains?'} > Finished chain. The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.
EmbeddingRouterChain#
The EmbeddingRouterChain uses embeddings and similarity to route between destination chains. from langchain.chains.router.embedding_router import EmbeddingRouterChain from langchain.embeddings import CohereEmbeddings from langchain.vectorstores import Chroma names_and_descriptions = [ ("physics", ["for questions about physics"]), ("math", ["for questions about math"]), ]
("math", ["for questions about math"]), ] router_chain = EmbeddingRouterChain.from_names_and_descriptions( names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"] ) Using embedded DuckDB without persistence: data will be transient chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True) print(chain.run("What is black body radiation?")) > Entering new MultiPromptChain chain... physics: {'input': 'What is black body radiation?'} > Finished chain. Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation. print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3")) > Entering new MultiPromptChain chain... math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'} > Finished chain. ? Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. previous LLM Chain next Sequential Chains Contents LLMRouterChain EmbeddingRouterChain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 02, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/router.html
bfcdf6d59365-0
How-To Guides#
Types#
The first set of examples all highlight different types of memory.
ConversationBufferMemory
ConversationBufferWindowMemory
Entity Memory
Conversation Knowledge Graph Memory
ConversationSummaryMemory
ConversationSummaryBufferMemory
ConversationTokenBufferMemory
VectorStore-Backed Memory
Usage#
The examples here all highlight how to use memory in different ways.
How to add Memory to an LLMChain
How to add memory to a Multi-Input Chain
How to add Memory to an Agent
Adding Message Memory backed by a database to an Agent
Cassandra Chat Message History
How to customize conversational memory
How to create a custom Memory class
Dynamodb Chat Message History
Entity Memory with SQLite storage
Momento
Mongodb Chat Message History
Motörhead Memory
Motörhead Memory (Managed)
How to use multiple memory classes in the same chain
Postgres Chat Message History
Redis Chat Message History
Zep Memory
https://python.langchain.com/en/latest/modules/memory/how_to_guides.html
302dcc2ccaac-0
Getting Started# This notebook walks through how LangChain thinks about memory. Memory involves keeping state around throughout a user’s interactions with a language model. A user’s interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type. In general, there are two ways to work with each type of memory: the standalone functions that extract information from a sequence of messages, and the way that type of memory can be used in a chain. Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages. In this notebook, we will walk through the simplest form of memory: “buffer” memory, which just involves keeping a buffer of all prior messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (both returning a string as well as a list of messages). ChatMessageHistory# One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all. You may want to use this class directly if you are managing memory outside of a chain. from langchain.memory import ChatMessageHistory history = ChatMessageHistory() history.add_user_message("hi!") history.add_ai_message("whats up?")
https://python.langchain.com/en/latest/modules/memory/getting_started.html
302dcc2ccaac-1
history.messages [HumanMessage(content='hi!', additional_kwargs={}), AIMessage(content='whats up?', additional_kwargs={})] ConversationBufferMemory# We now show how to use this simple concept in a chain. We first showcase ConversationBufferMemory, which is just a wrapper around ChatMessageHistory that exposes the messages as a variable. We can first extract it as a string. from langchain.memory import ConversationBufferMemory memory = ConversationBufferMemory() memory.chat_memory.add_user_message("hi!") memory.chat_memory.add_ai_message("whats up?") memory.load_memory_variables({}) {'history': 'Human: hi!\nAI: whats up?'} We can also get the history as a list of messages: memory = ConversationBufferMemory(return_messages=True) memory.chat_memory.add_user_message("hi!") memory.chat_memory.add_ai_message("whats up?") memory.load_memory_variables({}) {'history': [HumanMessage(content='hi!', additional_kwargs={}), AIMessage(content='whats up?', additional_kwargs={})]} Using in a chain# Finally, let's take a look at using this in a chain (setting verbose=True so we can see the prompt). from langchain.llms import OpenAI from langchain.chains import ConversationChain llm = OpenAI(temperature=0) conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
https://python.langchain.com/en/latest/modules/memory/getting_started.html
302dcc2ccaac-2
Current conversation: Human: Hi there! AI: > Finished chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="I'm doing well! Just having a conversation with an AI.") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: I'm doing well! Just having a conversation with an AI. AI: > Finished chain. " That's great! It's always nice to have a conversation with someone new. What would you like to talk about?" conversation.predict(input="Tell me about yourself.") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: I'm doing well! Just having a conversation with an AI. AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about? Human: Tell me about yourself. AI: > Finished chain.
https://python.langchain.com/en/latest/modules/memory/getting_started.html
302dcc2ccaac-3
" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers." Saving Message History# You may often have to save messages and then load them to use again. This can be done by first converting the messages to ordinary Python dictionaries, saving those (as JSON, for example), and then loading them back. Here is an example of doing that. import json from langchain.memory import ChatMessageHistory from langchain.schema import messages_from_dict, messages_to_dict history = ChatMessageHistory() history.add_user_message("hi!") history.add_ai_message("whats up?") dicts = messages_to_dict(history.messages) dicts [{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}}, {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}] new_messages = messages_from_dict(dicts) new_messages [HumanMessage(content='hi!', additional_kwargs={}), AIMessage(content='whats up?', additional_kwargs={})] And that's it for getting started! There are plenty of different types of memory; check out our examples to see them all.
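The example above stops at in-memory dictionaries. A minimal sketch of the full round trip through a file might look like the following; the messages.json filename is arbitrary, and passing messages= to the ChatMessageHistory constructor is an assumption based on its field definition in this release.

import json

from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")

# Serialize to disk; the filename is arbitrary.
with open("messages.json", "w") as f:
    json.dump(messages_to_dict(history.messages), f)

# Later, or in a different process, load the messages back.
with open("messages.json") as f:
    loaded = messages_from_dict(json.load(f))

# Assumption: ChatMessageHistory accepts its messages at construction.
restored = ChatMessageHistory(messages=loaded)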
https://python.langchain.com/en/latest/modules/memory/getting_started.html
53e000fed432-0
How to add Memory to an LLMChain# This notebook goes over how to use the Memory class with an LLMChain. For the purposes of this walkthrough, we will add the ConversationBufferMemory class, although this can be any memory class. from langchain.memory import ConversationBufferMemory from langchain import OpenAI, LLMChain, PromptTemplate The most important step is setting up the prompt correctly. In the prompt below, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the PromptTemplate and the ConversationBufferMemory match up (chat_history). template = """You are a chatbot having a conversation with a human. {chat_history} Human: {human_input} Chatbot:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template ) memory = ConversationBufferMemory(memory_key="chat_history") llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory, ) llm_chain.predict(human_input="Hi there my friend") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: Hi there my friend Chatbot: > Finished LLMChain chain. ' Hi there, how are you doing today?' llm_chain.predict(human_input="Not too bad - how are you?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: Hi there my friend AI: Hi there, how are you doing today?
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory.html
53e000fed432-1
Human: Not too bad - how are you? Chatbot: > Finished LLMChain chain. " I'm doing great, thank you for asking!"
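Because the only contract here is that memory_key matches a variable in the prompt, swapping in a different memory class is a small change. A hedged sketch using ConversationBufferWindowMemory, which keeps only the last k exchanges, with the same prompt:

from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template)

# k=2 keeps only the two most recent exchanges in the formatted prompt,
# which bounds prompt growth in long conversations.
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=2)

llm_chain = LLMChain(llm=OpenAI(), prompt=prompt, verbose=True, memory=memory)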
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory.html
544b5e00c11a-0
How to customize conversational memory# This notebook walks through a few ways to customize conversational memory. from langchain.llms import OpenAI from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory llm = OpenAI(temperature=0) AI Prefix# The first way to do so is by changing the AI prefix in the conversation summary. By default, this is set to “AI”, but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that below. # Here it is by default set to "AI" conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
544b5e00c11a-1
Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the next few days is sunny with temperatures in the mid-70s.' # Now we can override it and set it to "AI Assistant" from langchain.prompts.prompt import PromptTemplate template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: {history} Human: {input} AI Assistant:""" PROMPT = PromptTemplate( input_variables=["history", "input"], template=template ) conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(ai_prefix="AI Assistant") ) conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting:
https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
544b5e00c11a-2
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI Assistant: Hi there! It's nice to meet you. How can I help you today? Human: What's the weather? AI Assistant: > Finished ConversationChain chain. ' The current weather is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is sunny with a high of 78 degrees and a low of 65 degrees.' Human Prefix# The next way to do so is by changing the Human prefix in the conversation summary. By default, this is set to “Human”, but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that below. # Now we can override it and set it to "Friend" from langchain.prompts.prompt import PromptTemplate template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: {history} Friend: {input} AI:""" PROMPT = PromptTemplate( input_variables=["history", "input"], template=template ) conversation = ConversationChain( prompt=PROMPT, llm=llm, verbose=True, memory=ConversationBufferMemory(human_prefix="Friend") )
https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
544b5e00c11a-3
conversation.predict(input="Hi there!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: > Finished ConversationChain chain. " Hi there! It's nice to meet you. How can I help you today?" conversation.predict(input="What's the weather?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Friend: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Friend: What's the weather? AI: > Finished ConversationChain chain. ' The weather right now is sunny and warm with a temperature of 75 degrees Fahrenheit. The forecast for the rest of the day is mostly sunny with a high of 82 degrees.'
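The two customizations are independent, so both sides of the conversation can be renamed at once. This sketch simply combines the two examples above; it was not run, so no model output is shown.

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts.prompt import PromptTemplate

# Rename both prefixes; the prompt text must mirror both names.
template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{history}
Friend: {input}
AI Assistant:"""

PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

conversation = ConversationChain(
    prompt=PROMPT,
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(ai_prefix="AI Assistant", human_prefix="Friend"),
)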
https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
df8d26d5aa2f-0
Motörhead Memory (Managed)# Motörhead is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications. Setup# See instructions at Motörhead for running the managed version of Motörhead. You can retrieve your api_key and client_id by creating an account on Metal. from langchain.memory.motorhead_memory import MotorheadMemory from langchain import OpenAI, LLMChain, PromptTemplate template = """You are a chatbot having a conversation with a human. {chat_history} Human: {human_input} AI:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input"], template=template ) memory = MotorheadMemory( api_key="YOUR_API_KEY", client_id="YOUR_CLIENT_ID", session_id="testing-1", memory_key="chat_history" ) await memory.init() # loads previous state from Motörhead 🤘 llm_chain = LLMChain( llm=OpenAI(), prompt=prompt, verbose=True, memory=memory, ) llm_chain.run("hi im bob") > Entering new LLMChain chain... Prompt after formatting:
https://python.langchain.com/en/latest/modules/memory/examples/motorhead_memory_managed.html
df8d26d5aa2f-1
You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: > Finished chain. ' You said your name is Bob. Is that correct?' llm_chain.run("whats for dinner?") > Entering new LLMChain chain... Prompt after formatting: You are a chatbot having a conversation with a human. Human: hi im bob AI: Hi Bob, nice to meet you! How are you doing today? Human: whats my name? AI: You said your name is Bob. Is that correct? Human: whats for dinner? AI: > Finished chain. " I'm sorry, I'm not sure what you're asking. Could you please rephrase your question?"
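One caveat: memory.init() is a coroutine, so the bare await above only works inside a notebook or another already-running event loop. In a plain script you would wrap the setup in an async function, roughly as follows; the credentials and session id are placeholders.

import asyncio

from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.memory.motorhead_memory import MotorheadMemory

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""

async def build_chain() -> LLMChain:
    memory = MotorheadMemory(
        api_key="YOUR_API_KEY",  # placeholder credentials
        client_id="YOUR_CLIENT_ID",
        session_id="testing-1",
        memory_key="chat_history",
    )
    await memory.init()  # load any previous state for this session
    prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template)
    return LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)

llm_chain = asyncio.run(build_chain())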
https://python.langchain.com/en/latest/modules/memory/examples/motorhead_memory_managed.html
950be21a1aa1-0
How to use multiple memory classes in the same chain# It is also possible to use multiple memory classes in the same chain. To combine multiple memory classes, we can initialize the CombinedMemory class, and then use that. from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory conv_memory = ConversationBufferMemory( memory_key="chat_history_lines", input_key="input" ) summary_memory = ConversationSummaryMemory(llm=OpenAI(), input_key="input") # Combined memory = CombinedMemory(memories=[conv_memory, summary_memory]) _DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: {history} Current conversation: {chat_history_lines} Human: {input} AI:""" PROMPT = PromptTemplate( input_variables=["history", "input", "chat_history_lines"], template=_DEFAULT_TEMPLATE ) llm = OpenAI(temperature=0) conversation = ConversationChain( llm=llm, verbose=True, memory=memory, prompt=PROMPT ) conversation.run("Hi!") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
https://python.langchain.com/en/latest/modules/memory/examples/multiple_memory.html
950be21a1aa1-1
Summary of conversation: Current conversation: Human: Hi! AI: > Finished chain. ' Hi there! How can I help you?' conversation.run("Can you tell me a joke?") > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Summary of conversation: The human greets the AI, to which the AI responds with a polite greeting and an offer to help. Current conversation: Human: Hi! AI: Hi there! How can I help you? Human: Can you tell me a joke? AI: > Finished chain. ' Sure! What did the fish say when it hit the wall?\nHuman: I don\'t know.\nAI: "Dam!"'
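Because CombinedMemory just fans out to its children, each child keeps its own view of the conversation and can be inspected directly. An illustrative check, reusing the variable names from the example above:

# The buffer memory holds the raw recent lines; the summary memory
# holds the rolling summary text.
print(conv_memory.buffer)
print(summary_memory.buffer)

# CombinedMemory merges both into a single variables dict, keyed by
# "chat_history_lines" and "history" as configured above.
print(memory.load_memory_variables({"input": "placeholder"}))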
https://python.langchain.com/en/latest/modules/memory/examples/multiple_memory.html
909c238cb09c-0
How to add memory to a Multi-Input Chain# Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. As an example of such a chain, we will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.embeddings.cohere import CohereEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch from langchain.vectorstores import Chroma from langchain.docstore.document import Document with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. query = "What did the president say about Justice Breyer" docs = docsearch.similarity_search(query) from langchain.chains.question_answering import load_qa_chain from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.memory import ConversationBufferMemory template = """You are a chatbot having a conversation with a human. Given the following extracted parts of a long document and a question, create a final answer.
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html
909c238cb09c-1
{context} {chat_history} Human: {human_input} Chatbot:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input", "context"], template=template ) memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input") chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "human_input": query}, return_only_outputs=True) {'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'} print(chain.memory.buffer) Human: What did the president say about Justice Breyer AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
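Because chat_history persists in the memory object between calls, a follow-up question can lean on the earlier exchange. The continuation below is illustrative and was not part of the original run; how well the pronoun resolves depends on the model.

# The stored history lets the model resolve "he" against the earlier
# answer about Justice Breyer.
follow_up = "Did he say anything about who will replace him?"
chain({"input_documents": docs, "human_input": follow_up}, return_only_outputs=True)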
https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html
288269394a4e-0
Zep Memory# REACT Agent Chat Message History Example# This notebook demonstrates how to use the Zep Long-term Memory Store as memory for your chatbot. We’ll demonstrate: Adding conversation history to the Zep memory store. Running an agent and having messages automatically added to the store. Viewing the enriched messages. Vector search over the conversation history. More on Zep: Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. Key Features: Long-term memory persistence, with access to historical messages irrespective of your summarization strategy. Auto-summarization of memory messages based on a configurable message window. A series of summaries is stored, providing flexibility for future summarization strategies. Vector search over memories, with messages automatically embedded on creation. Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly. Python and JavaScript SDKs. Zep project: getzep/zep Docs: https://getzep.github.io from langchain.memory.chat_message_histories import ZepChatMessageHistory from langchain.memory import ConversationBufferMemory from langchain import OpenAI from langchain.schema import HumanMessage, AIMessage from langchain.tools import DuckDuckGoSearchRun from langchain.agents import initialize_agent, AgentType from uuid import uuid4 # Set this to your Zep server URL ZEP_API_URL = "http://localhost:8000" session_id = str(uuid4()) # This is a unique identifier for the user
https://python.langchain.com/en/latest/modules/memory/examples/zep_memory.html
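The walkthrough is truncated here. A hedged sketch of the likely next step, constructing the Zep-backed history and wiring it into a buffer memory, follows; the url keyword and the chat_memory wiring match this release's signatures as far as I know, but treat them as assumptions.

from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import ZepChatMessageHistory

# Assumes ZEP_API_URL and session_id from the cell above.
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,  # assumption: `url` is the server-address keyword
)

# Persist all chain messages in Zep instead of process memory.
memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=zep_chat_history)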