"\n\nStiamo unendo le nostre forze con quelle dei nostri alleati europei per sequestrare yacht, appartamenti di lusso e jet privati di Putin. Abbiamo chiuso lo spazio aereo americano ai voli russi e stiamo fornendo più di un miliardo di dollari in assistenza all'Ucraina. Abbiamo anche mobilitato le nostre forze terrestri, aeree e navali per proteggere i paesi della NATO. Abbiamo anche rilasciato 60 milioni di barili di petrolio dalle riserve di tutto il mondo, di cui 30 milioni dalla nostra riserva strategica di petrolio. Stiamo affrontando una prova reale e ci vorrà del tempo, ma alla fine Putin non riuscirà a spegnere l'amore dei popoli per la libertà.",
"\n\nIl Presidente Biden ha lottato per passare l'American Rescue Plan per aiutare le persone che soffrivano a causa della pandemia. Il piano ha fornito sollievo economico immediato a milioni di americani, ha aiutato a mettere cibo sulla loro tavola, a mantenere un tetto sopra le loro teste e a ridurre il costo dell'assicurazione sanitaria. Il piano ha anche creato più di 6,5 milioni di nuovi posti di lavoro, il più alto numero di posti di lavoro creati in un anno nella storia degli Stati Uniti. Il Presidente Biden ha anche firmato la legge bipartitica sull'infrastruttura, la più ampia iniziativa di ricostruzione della storia degli Stati Uniti. Il piano prevede di modernizzare le strade, gli aeroporti, i porti e le vie navigabili in"], 'output_text': "\n\nIl Presidente Biden sta lavorando per aiutare le persone che soffrono a causa della pandemia attraverso l'American Rescue Plan e la legge bipartitica sull'infrastruttura. Gli Stati Uniti e i loro alleati stanno anche imponendo sanzioni economiche a Putin e tagliando l'accesso della Russia alla tecnologia. Stanno anche sequestrando yacht, appartamenti di lusso e jet privati di Putin e fornendo più di un miliardo di dollari in assistenza all'Ucraina. Alla fine, Putin non riuscirà a spegnere l'amore dei popoli per la libertà."} The refine Chain# This sections shows results of using the refine Chain to do summarization.
The refine Chain

This section shows results of using the refine chain to do summarization.

chain = load_summarize_chain(llm, chain_type="refine")
chain.run(docs)

"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This investment will"

Intermediate Steps

We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.

chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
chain({"input_documents": docs}, return_only_outputs=True) {'refine_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.", "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. Putin's war on Ukraine has left Russia weaker and the rest of the world stronger, with the world uniting in support of democracy and peace.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"],
'output_text': "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. prompt_template = """Write a concise summary of the following: {text} CONCISE SUMMARY IN ITALIAN:""" PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) refine_template = ( "Your job is to produce a final summary\n" "We have provided an existing summary up to a certain point: {existing_answer}\n" "We have the opportunity to refine the existing summary" "(only if needed) with some more context below.\n" "------------\n"
"------------\n" "{text}\n" "------------\n" "Given the new context, refine the original summary in Italian" "If the context isn't useful, return the original summary." ) refine_prompt = PromptTemplate( input_variables=["existing_answer", "text"], template=refine_template, ) chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=PROMPT, refine_prompt=refine_prompt) chain({"input_documents": docs}, return_only_outputs=True) {'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia e bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare,",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."],
'output_text': "\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."} previous Question Answering next Retrieval Question/Answering Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
Source: https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html
Question Answering with Sources

This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in-depth explanation of what these chain types are, see here.

Prepare Data

First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
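Before running any chain, it can help to check what the search actually returned; each Document carries its text plus the source metadata attached above. A small sketch over the objects already defined:

for doc in docs:
    # `source` holds the chunk index we stored in the metadata above
    print(doc.metadata["source"], "->", doc.page_content[:80], "...")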
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

Quickstart

If you just want to get started as quickly as possible, this is the recommended way to do it:

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

If you want more control and understanding over what is happening, please see the information below.

The stuff Chain

This section shows results of using the stuff chain to do question answering with sources.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
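The answer and its sources come back as one string; if you need them separately, a small split works. A sketch, assuming the "\nSOURCES:" suffix format shown above:

import re

output = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
text = output["output_text"]
match = re.search(r"\nSOURCES:\s*(.*)", text)
sources = match.group(1).split(", ") if match else []
print(text.split("\nSOURCES:")[0].strip())  # 'The president thanked Justice Breyer for his service.'
print(sources)                              # ['30-pl']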
Custom Prompts

You can also use your own prompts with this chain. In this example, we will respond in Italian.

template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
Respond in Italian.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER IN ITALIAN:"""
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'}

The map_reduce Chain

This section shows results of using the map_reduce chain to do question answering with sources.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}

Intermediate Steps

We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
  ' None',
  ' None',
  ' None'],
 'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
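Each entry in intermediate_steps corresponds to one retrieved document from the map step, so pairing them up shows which document contributed the answer. A minimal sketch over the result above:

result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
for doc, step in zip(docs, result["intermediate_steps"]):
    # the source metadata identifies the chunk; the step is the map-phase answer
    print(doc.metadata["source"], "->", step.strip())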
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", ' Non pertinente.', ' Non rilevante.', " Non c'è testo pertinente."], 'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.', '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
 'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'}

Custom Prompts

You can also use your own prompts with this chain. In this example, we will respond in Italian.

refine_template = (
    "The original question is as follows: {question}\n"
    "We have provided an existing answer, including sources: {existing_answer}\n"
    "We have the opportunity to refine the existing answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_str}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question (in Italian). "
"answer the question (in Italian)" "If you do update it, please update the sources as well. " "If the context isn't useful, return the original answer." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_template, ) question_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question in Italian: {question}\n" ) question_prompt = PromptTemplate( input_variables=["context_str", "question"], template=question_template ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True) query = "What did the president say about Justice Breyer" result = chain({"input_documents": docs, "question": query}, return_only_outputs=True) result["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.' result["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'},
 {'answer': ' This document does not answer the question', 'score': '0'},
 {'answer': ' This document does not answer the question', 'score': '0'},
 {'answer': ' This document does not answer the question', 'score': '0'}]
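Since each map-rerank step carries a string score, you can recover the winning candidate yourself; a small sketch over the result above:

best = max(result["intermediate_steps"], key=lambda step: int(step["score"]))
print(best["answer"])  # same text as result["output_text"]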
Custom Prompts

You can also use your own prompts with this chain. In this example, we will respond in Italian.

from langchain.output_parsers import RegexParser

output_parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:

Question: [question here]
Helpful Answer In Italian: [answer here]
Score: [score between 0 and 100]

Begin!

Context:
---------
{context}
---------
Question: {question}
Helpful Answer In Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    output_parser=output_parser,
)

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_rerank", metadata_keys=['source'], return_intermediate_steps=True, prompt=PROMPT)
query = "What did the president say about Justice Breyer"
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
result

{'source': 30,
 'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.',
   'score': '100'},
  {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'},
  {'answer': ' Non so.', 'score': '0'},
  {'answer': ' Il presidente non ha detto nulla sulla giustizia Breyer.', 'score': '100'}],
 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha onorato la sua carriera.'}
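To see what that RegexParser actually does, you can run it on a hand-written completion (the sample string below is hypothetical, not a real model output):

sample = "Il presidente ha ringraziato Justice Breyer.\nScore: 95"
print(output_parser.parse(sample))
# {'answer': 'Il presidente ha ringraziato Justice Breyer.', 'score': '95'}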
Source: https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html
Hypothetical Document Embeddings

This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper. At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer, and then embeds that generated document, using the resulting embedding in place of the raw query embedding.

In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.

from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate

base_embeddings = OpenAIEmbeddings()
llm = OpenAI()

# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")

# Now we can use it as any embedding class!
result = embeddings.embed_query("Where is the Taj Mahal?")

Multiple generations

We can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents so that it returns multiple completions.

multi_llm = OpenAI(n=4, best_of=4)
embeddings = HypotheticalDocumentEmbedder.from_llm(multi_llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
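Conceptually, that combination step is just a mean over the generated documents' vectors. A standalone illustration with made-up 2-d embeddings (a toy sketch, not the library's internals):

import numpy as np

# four hypothetical document embeddings (toy 2-d values)
vectors = np.array([[0.1, 0.3], [0.2, 0.1], [0.0, 0.2], [0.1, 0.0]])
combined = vectors.mean(axis=0)
print(combined)  # [0.1  0.15]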
Using our own prompts

Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that. In the example below, let’s condition it to generate text about a state of the union address (because we will use that in the next example).

prompt_template = """Please answer the user's question about the most recent state of the union address
Question: {question}
Answer:"""
prompt = PromptTemplate(input_variables=["question"], template=prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
embeddings = HypotheticalDocumentEmbedder(llm_chain=llm_chain, base_embeddings=base_embeddings)
result = embeddings.embed_query("What did the president say about Ketanji Brown Jackson")

Using HyDE

Now that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example.

from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

docsearch = Chroma.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.

We cannot let this happen.

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Source: https://python.langchain.com/en/latest/modules/chains/index_examples/hyde.html
Chat Over Documents with Chat History

This notebook goes over how to set up a chain to chat over documents with chat history using a ConversationalRetrievalChain. The only difference between this chain and the RetrievalQAChain is that this one allows for passing in a chat history, which can be used to allow for follow-up questions.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain

Load in documents. You can replace this with a loader for whatever type of data you want.

from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()

If you had multiple loaders that you wanted to combine, you could do something like:

# loaders = [....]
# docs = []
# for loader in loaders:
#     docs.extend(loader.load())

We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)

Using embedded DuckDB without persistence: data will be transient
We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.

from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

We now initialize the ConversationalRetrievalChain:

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), memory=memory)

query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

query = "Did he mention who she succeeded"
result = qa({"question": query})
result['answer']

' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
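With return_messages=True, the memory records the turns as message objects; a quick way to inspect them is a sketch against the memory object above, assuming the standard chat_memory attribute on ConversationBufferMemory:

for message in memory.chat_memory.messages:
    # HumanMessage / AIMessage objects, in conversation order
    print(type(message).__name__, "->", message.content[:60])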
result = qa({"question": query, "chat_history": chat_history}) result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Here’s an example of asking a question with some chat history chat_history = [(query, result["answer"])] query = "Did he mention who she suceeded" result = qa({"question": query, "chat_history": chat_history}) result['answer'] ' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.' Return Source Documents# You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned. qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True) chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) result['source_documents'][0]
Return Source Documents

You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect which documents were returned.

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]

Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../state_of_the_union.txt'})

ConversationalRetrievalChain with search_distance

If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.

vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})

ConversationalRetrievalChain with map_reduce

We can also use different types of combine-document chains with the ConversationalRetrievalChain.

from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")

chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

ConversationalRetrievalChain with Question Answering with sources

You can also use this chain with the question answering with sources chain.

from langchain.chains.qa_with_sources import load_qa_with_sources_chain

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")

chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"

ConversationalRetrievalChain with streaming to stdout

Output from the chain will be streamed to stdout token by token in this example.

from langchain.chains.llm import LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain

# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)

question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)

qa = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

 The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.

chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})

 Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.

get_chat_history Function

You can also specify a get_chat_history function, which can be used to format the chat_history string.

def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)
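A quick check of the formatter on a toy history (the values below are hypothetical):

print(get_chat_history([("Hi", "Hello!"), ("How are you?", "Fine.")]))
# Human:Hi
# AI:Hello!
# Human:How are you?
# AI:Fine.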
result = qa({"question": query, "chat_history": chat_history}) result['answer'] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." previous Analyze Document next Graph QA Contents Pass in chat history Return Source Documents ConversationalRetrievalChain with search_distance ConversationalRetrievalChain with map_reduce ConversationalRetrievalChain with Question Answering with sources ConversationalRetrievalChain with streaming to stdout get_chat_history Function By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
Source: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
Question Answering

This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map_rerank. For a more in-depth explanation of what these chain types are, see here.

Prepare Data

First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.indexes.vectorstore import VectorstoreIndexCreator

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

query = "What did the president say about Justice Breyer"
docs = docsearch.get_relevant_documents(query)

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

Quickstart

If you just want to get started as quickly as possible, this is the recommended way to do it:

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain.run(input_documents=docs, question=query)

' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'

If you want more control and understanding over what is happening, please see the information below.

The stuff Chain

This section shows results of using the stuff chain to do question answering.

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)

{'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}

Custom Prompts

You can also use your own prompts with this chain. In this example, we will respond in Italian.

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-2
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'} The map_reduce Chain# This sections shows results of using the map_reduce Chain to do question answering. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'} Intermediate Steps We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.', ' None', ' None'],
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-3
'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question. Return any relevant text translated into Italian. {context} Question: {question} Relevant text, if any, in Italian:""" QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context", "question"] ) combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian. If you don't know the answer, just say that you don't know. Don't try to make up an answer. QUESTION: {question} ========= {summaries} ========= Answer in Italian:""" COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"] ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-4
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', ' Non ha detto nulla riguardo a Justice Breyer.', " Non c'è testo pertinente."], 'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering. chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-5
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable. chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.',
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-6
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'], 'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. refine_prompt_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question. " "If the context isn't useful, return the original answer. Reply in Italian." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_prompt_template, ) initial_qa_template = ( "Context information is below. \n"
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-7
"---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n" ) initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.', "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.",
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-8
"\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.", "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"],
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-9
'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"} The map-rerank Chain# This sections shows results of using the map-rerank Chain to do question answering with sources. chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True) query = "What did the president say about Justice Breyer" results = chain({"input_documents": docs, "question": query}, return_only_outputs=True) results["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.' results["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'}, {'answer': ' This document does not answer the question', 'score': '0'},
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-10
{'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}] Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. from langchain.output_parsers import RegexParser output_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"], ) prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format: Question: [question here] Helpful Answer In Italian: [answer here] Score: [score between 0 and 100] Begin! Context: --------- {context} --------- Question: {question} Helpful Answer In Italian:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser, ) chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.', 'score': '100'},
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
782f2811ec8e-11
{'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Non so.', 'score': '0'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}
https://python.langchain.com/en/latest/modules/chains/index_examples/question_answering.html
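As a quick recap of the four chain types covered above, the following minimal sketch (assuming the docs and query variables from the Prepare Data section are still in scope) runs the same question through each chain type so the answers can be compared side by side:

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

# Assumes `docs` and `query` were created in the Prepare Data section above.
for chain_type in ["stuff", "map_reduce", "refine", "map_rerank"]:
    chain = load_qa_chain(OpenAI(temperature=0), chain_type=chain_type)
    # Each chain type combines the retrieved documents differently, so
    # answer quality, latency, and token usage will all vary.
    print(chain_type, chain.run(input_documents=docs, question=query))

Note that map_reduce, refine, and map_rerank all make one LLM call per document, so they cost noticeably more than stuff on larger document sets.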
7e37be0ae203-0
Analyze Document# The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain, making it useful as an end-to-end chain. with open("../../state_of_the_union.txt") as f: state_of_the_union = f.read() Summarize# Let’s take a look at it in action below, using it to summarize a long document. from langchain import OpenAI from langchain.chains.summarize import load_summarize_chain llm = OpenAI(temperature=0) summary_chain = load_summarize_chain(llm, chain_type="map_reduce") from langchain.chains import AnalyzeDocumentChain summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain) summarize_document_chain.run(state_of_the_union) " In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America." Question Answering# Let’s take a look at this using a question answering chain. from langchain.chains.question_answering import load_qa_chain qa_chain = load_qa_chain(llm, chain_type="map_reduce")
https://python.langchain.com/en/latest/modules/chains/index_examples/analyze_document.html
7e37be0ae203-1
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain) qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?") ' The president thanked Justice Breyer for his service.'
https://python.langchain.com/en/latest/modules/chains/index_examples/analyze_document.html
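Under the hood, the AnalyzeDocumentChain uses a text splitter to break the document apart before handing the pieces to the combine_docs_chain. As a minimal sketch, assuming the text_splitter field is exposed on AnalyzeDocumentChain in your version, you can swap in your own splitter to control chunk size:

from langchain import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter

llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
# Assumption: `text_splitter` is an accepted field; smaller chunks trade
# more LLM calls for finer-grained intermediate summaries.
summarize_document_chain = AnalyzeDocumentChain(
    combine_docs_chain=summary_chain,
    text_splitter=CharacterTextSplitter(chunk_size=500, chunk_overlap=0),
)
summarize_document_chain.run(state_of_the_union)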
975f2344fbb4-0
Retrieval Question/Answering# This example showcases question answering over an index. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.llms import OpenAI from langchain.chains import RetrievalQA from langchain.document_loaders import TextLoader loader = TextLoader("../../state_of_the_union.txt") documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever()) query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Chain Type# You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see this notebook.
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-1
There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to map_reduce. qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever()) query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." The above way allows you to really simply change the chain_type, but it does not provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it directly to the RetrievalQA chain with the combine_documents_chain parameter. For example: from langchain.chains.question_answering import load_qa_chain qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff") qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever()) query = "What did the president say about Ketanji Brown Jackson" qa.run(query)
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-2
query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Custom Prompts# You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the base question answering chain from langchain.prompts import PromptTemplate prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. {context} Question: {question} Answer in Italian:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) chain_type_kwargs = {"prompt": PROMPT} qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs) query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani." Return Source Documents#
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-3
Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain. qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), return_source_documents=True) query = "What did the president say about Ketanji Brown Jackson" result = qa({"query": query}) result["result"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." result["source_documents"] [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-4
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-5
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
975f2344fbb4-6
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html
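The retriever passed to RetrievalQA also controls how many documents get stuffed into the prompt. As a minimal sketch, assuming the docsearch vector store from above, you can cap retrieval at the top two documents via search_kwargs:

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    # search_kwargs is forwarded to the underlying similarity search;
    # fewer documents means a shorter prompt and lower token usage.
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}),
)
qa.run("What did the president say about Ketanji Brown Jackson")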
4e847effc67a-0
Graph QA# This notebook goes over how to do question answering over a graph data structure. Create the graph# In this section, we construct an example graph. At the moment, this works best for small pieces of text. from langchain.indexes import GraphIndexCreator from langchain.llms import OpenAI from langchain.document_loaders import TextLoader index_creator = GraphIndexCreator(llm=OpenAI(temperature=0)) with open("../../state_of_the_union.txt") as f: all_text = f.read() We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment. text = "\n".join(all_text.split("\n\n")[105:108]) text 'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. ' graph = index_creator.from_text(text) We can inspect the created graph. graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')] Querying the graph#
https://python.langchain.com/en/latest/modules/chains/index_examples/graph_qa.html
4e847effc67a-1
We can now use the graph QA chain to ask questions of the graph. from langchain.chains import GraphQAChain chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True) chain.run("what is Intel going to build?") > Entering new GraphQAChain chain... Entities Extracted: Intel Full Context: Intel is going to build $20 billion semiconductor "mega site" Intel is building state-of-the-art factories Intel is creating 10,000 new good-paying jobs Intel is helping build Silicon Valley > Finished chain. ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.' Save the graph# We can also save and load the graph. graph.write_to_gml("graph.gml") from langchain.indexes.graph import NetworkxEntityGraph loaded_graph = NetworkxEntityGraph.from_gml("graph.gml") loaded_graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]
https://python.langchain.com/en/latest/modules/chains/index_examples/graph_qa.html
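Because the graph round-trips through GML, a reloaded graph can be queried exactly like the original. A minimal sketch combining the pieces above:

from langchain.chains import GraphQAChain
from langchain.indexes.graph import NetworkxEntityGraph
from langchain.llms import OpenAI

# Reload the persisted graph and point a fresh QA chain at it.
loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=loaded_graph, verbose=True)
chain.run("how many jobs is Intel creating?")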
3049636f7def-0
Retrieval Question Answering with Sources# This notebook goes over how to do question-answering with sources over an Index. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from an Index. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.embeddings.cohere import CohereEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch from langchain.vectorstores import Chroma with open("../../state_of_the_union.txt") as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. from langchain.chains import RetrievalQAWithSourcesChain from langchain import OpenAI chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()) chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True) {'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n', 'sources': '31-pl'} Chain Type#
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html
3049636f7def-1
You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see this notebook. There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to map_reduce. chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="map_reduce", retriever=docsearch.as_retriever()) chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True) {'answer': ' The president said "Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."\n', 'sources': '31-pl'} The above way allows you to really simply change the chain_type, but it does not provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it directly to the RetrievalQAWithSourcesChain chain with the combine_documents_chain parameter. For example: from langchain.chains.qa_with_sources import load_qa_with_sources_chain qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff") qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever()) qa({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html
3049636f7def-2
{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n', 'sources': '31-pl'}
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html
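If the source IDs alone are not enough, the chain can also hand back the full documents. A minimal sketch, assuming return_source_documents is a supported constructor kwarg on RetrievalQAWithSourcesChain in your version:

chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,  # assumption: supported in this version
)
result = chain({"question": "What did the president say about Justice Breyer"})
# `answer` and `sources` as before, plus the retrieved Document objects.
print(result["answer"], result["sources"])
print(len(result["source_documents"]))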
fd42d371f50d-0
Vector DB Text Generation# This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation. Prepare Data# First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on GitHub and split them into small enough Documents. from langchain.llms import OpenAI from langchain.docstore.document import Document import requests from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.prompts import PromptTemplate import pathlib import subprocess import tempfile def get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path)
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-1
github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url}) sources = get_github_docs("yirenlu92", "deno-manual-forked") source_chunks = [] splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0) for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'... Set Up Vector DB# Now that we have the documentation content in chunks, let’s put all this information in a vector index for easy retrieval. search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings()) Set Up LLM Chain with Custom Prompt# Next, let’s set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user. from langchain.chains import LLMChain prompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "topic"] ) llm = OpenAI(temperature=0) chain = LLMChain(llm=llm, prompt=PROMPT) Generate Text#
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-2
Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain. def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs)) generate_blog_post("environment variables")
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-3
[{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-4
settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno,
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-5
you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language,
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-6
and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO`
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
fd42d371f50d-7
environment variable to `bar`, you would use the following code:\n\n```'}]
https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_text_generation.html
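Since chain.apply returns a list of outputs, the helper above can also be reworked to return the drafts instead of printing them, which makes it easier to post-process or rank the candidates. A minimal sketch, reusing the search_index and chain objects from above:

def generate_blog_posts(topic: str, k: int = 4) -> list:
    # One candidate draft per retrieved document.
    docs = search_index.similarity_search(topic, k=k)
    inputs = [{"context": doc.page_content, "topic": topic} for doc in docs]
    return [output["text"] for output in chain.apply(inputs)]

drafts = generate_blog_posts("environment variables")
print(len(drafts))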
90f3c81b2521-0
LLM Chain# LLMChain is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM, and returns the LLM output. Below we show additional functionalities of the LLMChain class. from langchain import PromptTemplate, OpenAI, LLMChain prompt_template = "What is a good name for a company that makes {product}?" llm = OpenAI(temperature=0) llm_chain = LLMChain( llm=llm, prompt=PromptTemplate.from_template(prompt_template) ) llm_chain("colorful socks") {'product': 'colorful socks', 'text': '\n\nSocktastic!'} Additional ways of running LLM Chain# Aside from the __call__ and run methods shared by all Chain objects (see Getting Started to learn more), LLMChain offers a few more ways of calling the chain logic: apply allows you to run the chain against a list of inputs: input_list = [ {"product": "socks"}, {"product": "computer"}, {"product": "shoes"} ] llm_chain.apply(input_list) [{'text': '\n\nSocktastic!'}, {'text': '\n\nTechCore Solutions.'}, {'text': '\n\nFootwear Factory.'}] generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation information such as token usage and finish reason. llm_chain.generate(input_list)
https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html
90f3c81b2521-1
LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'}) predict is similar to the run method, except that the input keys are specified as keyword arguments instead of a Python dict. # Single input example llm_chain.predict(product="colorful socks") '\n\nSocktastic!' # Multiple inputs example template = """Tell me a {adjective} joke about {subject}.""" prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0)) llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.' Parsing the outputs# By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser on the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply. With predict: from langchain.output_parsers import CommaSeparatedListOutputParser output_parser = CommaSeparatedListOutputParser() template = """List all the colors in a rainbow"""
https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html
90f3c81b2521-2
template = """List all the colors in a rainbow""" prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser) llm_chain = LLMChain(prompt=prompt, llm=llm) llm_chain.predict() '\n\nRed, orange, yellow, green, blue, indigo, violet' With predict_and_parser: llm_chain.predict_and_parse() ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] Initialize from string# You can also construct an LLMChain from a string template directly. template = """Tell me a {adjective} joke about {subject}.""" llm_chain = LLMChain.from_string(llm=llm, template=template) llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.' previous Loading from LangChainHub next Sequential Chains Contents LLM Chain Additional ways of running LLM Chain Parsing the outputs Initialize from string By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/llm_chain.html
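As noted at the top of this page, LLMChain will also format memory key values into the prompt when a memory object is attached. A minimal sketch using ConversationBufferMemory, where the memory_key must match a variable in the prompt:

from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory

template = """You are a naming consultant.
{chat_history}
Human: {question}
AI:"""
prompt = PromptTemplate(template=template, input_variables=["chat_history", "question"])
llm_chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    memory=ConversationBufferMemory(memory_key="chat_history"),
)
llm_chain.predict(question="Suggest a name for a company that makes colorful socks.")
# The second call sees the first exchange via {chat_history}.
llm_chain.predict(question="Make it more playful.")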
58bfe67cbf1f-0
Serialization# This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. Saving a chain to disk# First, let’s go over how to save a chain to disk. This can be done with the .save method, and specifying a file path with a json or yaml extension. from langchain import PromptTemplate, OpenAI, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True) llm_chain.save("llm_chain.json") Let’s now take a look at what’s inside this saved file !cat llm_chain.json { "memory": null, "verbose": true, "prompt": { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }, "llm": { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null,
https://python.langchain.com/en/latest/modules/chains/generic/serialization.html
58bfe67cbf1f-1
"best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }, "output_key": "text", "_type": "llm_chain" } Loading a chain from disk# We can load a chain from disk by using the load_chain method. from langchain.chains import load_chain chain = load_chain("llm_chain.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' Saving components separately# In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component. llm_chain.prompt.save("prompt.json") !cat prompt.json { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" } llm_chain.llm.save("llm.json") !cat llm.json { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0,
https://python.langchain.com/en/latest/modules/chains/generic/serialization.html
58bfe67cbf1f-2
"top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" } config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } import json with open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2) !cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } We can then load it in the same way chain = load_chain("llm_chain_separate.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' previous Sequential Chains next Transformation Chain Contents Saving a chain to disk Loading a chain from disk Saving components separately By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/serialization.html
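The same save/load round trip works with a yaml extension; a minimal sketch:

from langchain.chains import load_chain

llm_chain.save("llm_chain.yaml")
chain = load_chain("llm_chain.yaml")
chain.run("whats 3 + 3")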
82f4a2b7a655-0
Creating a custom Chain# To implement your own custom chain you can subclass Chain and implement the following methods: from __future__ import annotations from typing import Any, Dict, List, Optional from pydantic import Extra from langchain.base_language import BaseLanguageModel from langchain.callbacks.manager import ( AsyncCallbackManagerForChainRun, CallbackManagerForChainRun, ) from langchain.chains.base import Chain from langchain.prompts.base import BasePromptTemplate class MyCustomChain(Chain): """ An example of a custom chain. """ prompt: BasePromptTemplate """Prompt object to use.""" llm: BaseLanguageModel output_key: str = "text" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] def _call( self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass
https://python.langchain.com/en/latest/modules/chains/generic/custom_chain.html
82f4a2b7a655-1
# a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = self.llm.generate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None ) # If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: run_manager.on_text("Log something about this run") return {self.output_key: response.generations[0][0].text} async def _acall( self, inputs: Dict[str, Any], run_manager: Optional[AsyncCallbackManagerForChainRun] = None, ) -> Dict[str, str]: # Your custom chain logic goes here # This is just an example that mimics LLMChain prompt_value = self.prompt.format_prompt(**inputs) # Whenever you call a language model, or another chain, you should pass # a callback manager to it. This allows the inner run to be tracked by # any callbacks that are registered on the outer run. # You can always obtain a callback manager for this by calling # `run_manager.get_child()` as shown below. response = await self.llm.agenerate_prompt( [prompt_value], callbacks=run_manager.get_child() if run_manager else None )
https://python.langchain.com/en/latest/modules/chains/generic/custom_chain.html
82f4a2b7a655-2
# If you want to log something about this run, you can do so by calling # methods on the `run_manager`, as shown below. This will trigger any # callbacks that are registered for that event. if run_manager: await run_manager.on_text("Log something about this run") return {self.output_key: response.generations[0][0].text} @property def _chain_type(self) -> str: return "my_custom_chain" from langchain.callbacks.stdout import StdOutCallbackHandler from langchain.chat_models.openai import ChatOpenAI from langchain.prompts.prompt import PromptTemplate chain = MyCustomChain( prompt=PromptTemplate.from_template('tell us a joke about {topic}'), llm=ChatOpenAI() ) chain.run({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()]) > Entering new MyCustomChain chain... Log something about this run > Finished chain. 'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'
https://python.langchain.com/en/latest/modules/chains/generic/custom_chain.html
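Because MyCustomChain declares a single input key and a single output key, it composes with the generic utilities just like a built-in chain. A minimal sketch chaining two instances with SimpleSequentialChain:

from langchain.chains import SimpleSequentialChain
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate

joke_chain = MyCustomChain(
    prompt=PromptTemplate.from_template('tell us a joke about {topic}'),
    llm=ChatOpenAI()
)
critic_chain = MyCustomChain(
    prompt=PromptTemplate.from_template('rate this joke from 1 to 10: {joke}'),
    llm=ChatOpenAI()
)
# SimpleSequentialChain pipes the single output of one chain into the
# single input of the next, regardless of the key names.
overall_chain = SimpleSequentialChain(chains=[joke_chain, critic_chain], verbose=True)
overall_chain.run('callbacks')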
2602798001b0-0
Async API for Chain# LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, acall) and LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap. import asyncio import time from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain def generate_serially(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) for _ in range(5): resp = chain.run(product="toothpaste") print(resp) async def async_generate(chain): resp = await chain.arun(product="toothpaste") print(resp) async def generate_concurrently(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) tasks = [async_generate(chain) for _ in range(5)] await asyncio.gather(*tasks) s = time.perf_counter() # If running this outside of Jupyter, use asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - s
https://python.langchain.com/en/latest/modules/chains/generic/async_chain.html
2602798001b0-1
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')

s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')

BrightSmile Toothpaste Company
BrightSmile Toothpaste Co.
BrightSmile Toothpaste
Gleaming Smile Inc.
SparkleSmile Toothpaste
Concurrent executed in 1.54 seconds.

BrightSmile Toothpaste Co.
MintyFresh Toothpaste Co.
SparkleSmile Toothpaste.
Pearly Whites Toothpaste Co.
BrightSmile Toothpaste.
Serial executed in 6.38 seconds.
https://python.langchain.com/en/latest/modules/chains/generic/async_chain.html
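The top-level await above only works inside a notebook's running event loop. A minimal sketch of how the same comparison might be run from a plain Python script, assuming generate_serially and generate_concurrently as defined above are in scope:

import asyncio
import time

if __name__ == "__main__":
    # asyncio.run creates and manages the event loop for the coroutine.
    s = time.perf_counter()
    asyncio.run(generate_concurrently())
    print(f"Concurrent executed in {time.perf_counter() - s:0.2f} seconds.")

    s = time.perf_counter()
    generate_serially()
    print(f"Serial executed in {time.perf_counter() - s:0.2f} seconds.")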
5c8fab61d0f7-0
Loading from LangChainHub#

This notebook covers how to load chains from LangChainHub.

from langchain.chains import load_chain

chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("whats 2 raised to .12")

> Entering new LLMMathChain chain...
whats 2 raised to .12
Answer: 1.0791812460476249
> Finished chain.

'Answer: 1.0791812460476249'

Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader

loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)

query = "What did the president say about Ketanji Brown Jackson"
chain.run(query)
https://python.langchain.com/en/latest/modules/chains/generic/from_hub.html
5c8fab61d0f7-1
query = "What did the president say about Ketanji Brown Jackson" chain.run(query) " The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence." previous Creating a custom Chain next LLM Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/chains/generic/from_hub.html
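load_chain can also read from a local file rather than the hub. A minimal sketch, assuming an existing chain object previously persisted with chain.save; the file name is illustrative:

from langchain.chains import load_chain

# Persist the chain's configuration to disk, then reload it.
chain.save("my_chain.json")
loaded_chain = load_chain("my_chain.json")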
371ee8ee9333-0
Transformation Chain#

This notebook showcases using a generic transformation chain. As an example, we will create a dummy transformation that takes in a very long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those.

from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return {"output_text": shortened_text}

transform_chain = TransformChain(
    input_variables=["text"],
    output_variables=["output_text"],
    transform=transform_func,
)

template = """Summarize this text:

{output_text}

Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)

sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)

' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
https://python.langchain.com/en/latest/modules/chains/generic/transformation.html
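Since a TransformChain just wraps a plain Python function, other transformations only require swapping in a different function with the same input/output dict shape. A minimal sketch; the 500-character cutoff is an arbitrary illustrative choice:

from langchain.chains import TransformChain

def truncate_func(inputs: dict) -> dict:
    # Keep only the first 500 characters of the input text.
    return {"output_text": inputs["text"][:500]}

truncate_chain = TransformChain(
    input_variables=["text"],
    output_variables=["output_text"],
    transform=truncate_func,
)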
bd16129d4e81-0
Sequential Chains#

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another. In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains:

SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.

SimpleSequentialChain#

In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next. Let's walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play.

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-1
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)

review = overall_chain.run("Tragedy at sunset on the beach")

> Entering new SimpleSequentialChain chain...

Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future.

The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-2
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.

Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.

The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.

> Finished chain.

print(review)

Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-3
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.

Sequential Chain#

Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs. Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that, because we were just passing the output of one chain directly as input to the next; here we do have to worry about it, because we have multiple inputs.

# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.

Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", "era"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-4
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["synopsis", "review"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

> Entering new SequentialChain chain...
> Finished chain.

{'title': 'Tragedy at sunset on the beach',
 'era': 'Victorian England',
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-5
 'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-6
'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."} Memory in Sequential Chains# Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to do manage this and clean up your chains. For example, using the previous playwright SequentialChain, lets say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or we can add a SimpleMemory to the chain to manage this context: from langchain.chains import SequentialChain from langchain.memory import SimpleMemory llm = OpenAI(temperature=.7)
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-7
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.

Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}

Social Media Post:
"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["social_post_text"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

> Entering new SequentialChain chain...
> Finished chain.

{'title': 'Tragedy at sunset on the beach',
 'era': 'Victorian England',
 'time': 'December 25th, 8pm PST',
 'location': 'Theater in the Park',
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
bd16129d4e81-8
 'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
https://python.langchain.com/en/latest/modules/chains/generic/sequential_chains.html
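Because a SequentialChain returns a dict containing the inputs, the memory values, and each declared output variable, individual fields can be read off the result directly. A small sketch using the chain defined above:

result = overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})
# Pull out just the generated social media post.
print(result["social_post_text"])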
d04b74d6fb51-0
Callbacks#

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the callbacks argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.

There are two main callback mechanisms:

Constructor callbacks will be used for all calls made on that object, and will be scoped to that object only, i.e. if you pass a handler to the LLMChain constructor, it will not be used by the model attached to that chain.

Request callbacks will be used for that specific request only, and all sub-requests that it contains (e.g. a call to an LLMChain triggers a call to a Model, which uses the same handler passed through). These are explicitly passed through.

Advanced: When you create a custom chain, you can easily set it up to use the same callback system as all the built-in chains. _call, _generate, _run, and the equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a second argument called run_manager, which is bound to that run and contains the logging methods that can be used by that object (i.e. on_llm_new_token). This is useful when constructing a custom chain. See this guide for more information on how to create custom chains and use callbacks inside them.
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
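A minimal sketch of the two mechanisms side by side, using the StdOutCallbackHandler shipped with the library; the prompt text is an illustrative placeholder:

from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Constructor callback: scoped to this chain object, fires on every call to it.
chain = LLMChain(llm=OpenAI(), prompt=prompt, callbacks=[handler])
chain.run(number=2)

# Request callback: applies to this one request and its sub-requests only.
chain = LLMChain(llm=OpenAI(), prompt=prompt)
chain.run(number=2, callbacks=[handler])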
d04b74d6fb51-1
CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered.

class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when chain ends running."""

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when chain errors."""

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
d04b74d6fb51-2
    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when tool errors."""

    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on arbitrary text."""

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run on agent end."""

How to use callbacks#

The callbacks argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:

Constructor callbacks: defined in the constructor, e.g. LLMChain(callbacks=[handler]), which will be used for all calls made on that object, and will be scoped to that object only. For example, if you pass a handler to the LLMChain constructor, it will not be used by the Model attached to that chain.

Request callbacks: defined in the call()/run()/apply() methods used for issuing a request, e.g. chain.call(inputs, callbacks=[handler]), which will be used for that specific request only, and all sub-requests that it contains (e.g. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the call() method).
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html
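To make the interface concrete, here is a minimal sketch of a custom handler; the class name, import path, and printed messages are illustrative assumptions rather than library-provided examples:

from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler

class MyLoggingHandler(BaseCallbackHandler):
    """A hypothetical handler that logs the start and end of LLM calls."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        # Fires once per LLM call, before the request is sent.
        print(f"LLM started with {len(prompts)} prompt(s)")

    def on_llm_end(self, response: Any, **kwargs: Any) -> Any:
        # Fires once the LLM call completes.
        print("LLM finished")

Such a handler could then be passed either in a constructor (LLMChain(..., callbacks=[MyLoggingHandler()])) or per request (chain.run(..., callbacks=[MyLoggingHandler()])), following the two mechanisms described above.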