'table_info': '\nCREATE TABLE "Customer" (\n\t"CustomerId" INTEGER NOT NULL, \n\t"FirstName" NVARCHAR(40) NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"Company" NVARCHAR(80), \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60) NOT NULL, \n\t"SupportRepId" INTEGER, \n\tPRIMARY KEY ("CustomerId"), \n\tFOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\tEmbraer - Empresa Brasileira de Aeronáutica S.A.\tAv. Brigadeiro Faria Lima, 2170\tSão José dos Campos\tSP\tBrazil\t12227-000\t+55 (12) 3923-5555\t+55 (12) 3923-5566\tluisg@embraer.com.br\t3\n2\tLeonie\tKöhler\tNone\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\tTremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\tNone\tftremblay@gmail.com\t3\n*/',
'stop': ['\nSQLResult:']},
'SELECT count(*) FROM Customer',
{'query': 'SELECT count(*) FROM Customer', 'dialect': 'sqlite'},
'SELECT count(*) FROM Customer',
'[(59,)]']}
Even this relatively large model will most likely fail to generate more complicated SQL by itself. However, you can log its inputs and outputs so that you can hand-correct them and use the corrected examples as few-shot prompt examples later. In practice, you could log any execution of your chain that raises an exception (as shown in the example below), or collect direct user feedback in cases where the results are incorrect but did not raise an exception.
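For instance, a minimal sketch of what such feedback logging could look like (the helper and file name are hypothetical, not part of the notebook):
import json

# Hypothetical helper: append a failed or user-flagged run to a JSONL file
# so its sql_cmd can be hand-corrected offline later.
def log_for_review(result: dict, feedback: str, path: str = "bad_runs.jsonl") -> None:
    with open(path, "a") as f:
        f.write(json.dumps({"feedback": feedback, **result}, default=str) + "\n")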
!poetry run pip install pyyaml chromadb
import yaml
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
11842.36s - pydevd: Sending message related to process being replaced timed-out after 5 seconds
Requirement already satisfied: pyyaml in /workspace/langchain/.venv/lib/python3.9/site-packages (6.0)
Requirement already satisfied: chromadb in /workspace/langchain/.venv/lib/python3.9/site-packages (0.3.21)
Requirement already satisfied: pandas>=1.3 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (2.0.1)
Requirement already satisfied: requests>=2.28 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (2.28.2)
Requirement already satisfied: pydantic>=1.9 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.10.7)
Requirement already satisfied: hnswlib>=0.7 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.7.0)
Requirement already satisfied: clickhouse-connect>=0.5.7 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.5.20)
Requirement already satisfied: sentence-transformers>=2.2.2 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (2.2.2)
Requirement already satisfied: duckdb>=0.7.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.7.1)
Requirement already satisfied: fastapi>=0.85.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.95.1)
Requirement already satisfied: uvicorn[standard]>=0.18.3 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.21.1)
Requirement already satisfied: numpy>=1.21.6 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.24.3)
Requirement already satisfied: posthog>=2.4.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.0.1)
Requirement already satisfied: certifi in /workspace/langchain/.venv/lib/python3.9/site-packages (from clickhouse-connect>=0.5.7->chromadb) (2022.12.7)
Requirement already satisfied: urllib3>=1.26 in /workspace/langchain/.venv/lib/python3.9/site-packages (from clickhouse-connect>=0.5.7->chromadb) (1.26.15)
Requirement already satisfied: pytz in /workspace/langchain/.venv/lib/python3.9/site-packages (from clickhouse-connect>=0.5.7->chromadb) (2023.3)
Requirement already satisfied: zstandard in /workspace/langchain/.venv/lib/python3.9/site-packages (from clickhouse-connect>=0.5.7->chromadb) (0.21.0)
Requirement already satisfied: lz4 in /workspace/langchain/.venv/lib/python3.9/site-packages (from clickhouse-connect>=0.5.7->chromadb) (4.3.2)
Requirement already satisfied: starlette<0.27.0,>=0.26.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from fastapi>=0.85.1->chromadb) (0.26.1)
Requirement already satisfied: python-dateutil>=2.8.2 in /workspace/langchain/.venv/lib/python3.9/site-packages (from pandas>=1.3->chromadb) (2.8.2)
Requirement already satisfied: tzdata>=2022.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from pandas>=1.3->chromadb) (2023.3)
Requirement already satisfied: six>=1.5 in /workspace/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.16.0)
Requirement already satisfied: monotonic>=1.5 in /workspace/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.6)
Requirement already satisfied: backoff>=1.10.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (2.2.1)
Requirement already satisfied: typing-extensions>=4.2.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from pydantic>=1.9->chromadb) (4.5.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /workspace/langchain/.venv/lib/python3.9/site-packages (from requests>=2.28->chromadb) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /workspace/langchain/.venv/lib/python3.9/site-packages (from requests>=2.28->chromadb) (3.4)
Requirement already satisfied: transformers<5.0.0,>=4.6.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (4.28.1)
Requirement already satisfied: tqdm in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (4.65.0)
Requirement already satisfied: torch>=1.6.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (1.13.1)
Requirement already satisfied: torchvision in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (0.14.1)
Requirement already satisfied: scikit-learn in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (1.2.2)
Requirement already satisfied: scipy in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (1.9.3)
Requirement already satisfied: nltk in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (3.8.1)
Requirement already satisfied: sentencepiece in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (0.1.98)
Requirement already satisfied: huggingface-hub>=0.4.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from sentence-transformers>=2.2.2->chromadb) (0.13.4)
Requirement already satisfied: click>=7.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (8.1.3)
Requirement already satisfied: h11>=0.8 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.14.0)
Requirement already satisfied: httptools>=0.5.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.5.0)
Requirement already satisfied: python-dotenv>=0.13 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (1.0.0)
Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.17.0)
Requirement already satisfied: watchfiles>=0.13 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.19.0)
Requirement already satisfied: websockets>=10.4 in /workspace/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (11.0.2)
Requirement already satisfied: filelock in /workspace/langchain/.venv/lib/python3.9/site-packages (from huggingface-hub>=0.4.0->sentence-transformers>=2.2.2->chromadb) (3.12.0)
Requirement already satisfied: packaging>=20.9 in /workspace/langchain/.venv/lib/python3.9/site-packages (from huggingface-hub>=0.4.0->sentence-transformers>=2.2.2->chromadb) (23.1)
Requirement already satisfied: anyio<5,>=3.4.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from starlette<0.27.0,>=0.26.1->fastapi>=0.85.1->chromadb) (3.6.2)
Requirement already satisfied: nvidia-cuda-runtime-cu11==11.7.99 in /workspace/langchain/.venv/lib/python3.9/site-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (11.7.99)
Requirement already satisfied: nvidia-cudnn-cu11==8.5.0.96 in /workspace/langchain/.venv/lib/python3.9/site-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (8.5.0.96)
Requirement already satisfied: nvidia-cublas-cu11==11.10.3.66 in /workspace/langchain/.venv/lib/python3.9/site-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (11.10.3.66)
Requirement already satisfied: nvidia-cuda-nvrtc-cu11==11.7.99 in /workspace/langchain/.venv/lib/python3.9/site-packages (from torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (11.7.99)
Requirement already satisfied: setuptools in /workspace/langchain/.venv/lib/python3.9/site-packages (from nvidia-cublas-cu11==11.10.3.66->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (67.7.1)
Requirement already satisfied: wheel in /workspace/langchain/.venv/lib/python3.9/site-packages (from nvidia-cublas-cu11==11.10.3.66->torch>=1.6.0->sentence-transformers>=2.2.2->chromadb) (0.40.0)
Requirement already satisfied: regex!=2019.12.17 in /workspace/langchain/.venv/lib/python3.9/site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers>=2.2.2->chromadb) (2023.3.23)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers>=2.2.2->chromadb) (0.13.3)
Requirement already satisfied: joblib in /workspace/langchain/.venv/lib/python3.9/site-packages (from nltk->sentence-transformers>=2.2.2->chromadb) (1.2.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from scikit-learn->sentence-transformers>=2.2.2->chromadb) (3.1.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /workspace/langchain/.venv/lib/python3.9/site-packages (from torchvision->sentence-transformers>=2.2.2->chromadb) (9.5.0)
Requirement already satisfied: sniffio>=1.1 in /workspace/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3.4.0->starlette<0.27.0,>=0.26.1->fastapi>=0.85.1->chromadb) (1.3.0)
from typing import Any, Dict

QUERY = "List all the customer first names that start with 'a'"

def _parse_example(result: Dict) -> Dict:
    sql_cmd_key = "sql_cmd"
    sql_result_key = "sql_result"
    table_info_key = "table_info"
    input_key = "input"
    final_answer_key = "answer"

    _example = {
        "input": result.get("query"),
    }

    steps = result.get("intermediate_steps")
    answer_key = sql_cmd_key  # the first one
    for step in steps:
        # The steps are in pairs, a dict (input) followed by a string (output).
        # Unfortunately there is no schema, but you can look at the input key of the
        # dict to see what the output is supposed to be.
        if isinstance(step, dict):
            # Grab the table info from input dicts in the intermediate steps once.
            if table_info_key not in _example:
                _example[table_info_key] = step.get(table_info_key)

            if input_key in step:
                if step[input_key].endswith("SQLQuery:"):
                    answer_key = sql_cmd_key  # this is the SQL generation input
                if step[input_key].endswith("Answer:"):
                    answer_key = final_answer_key  # this is the final answer input
            elif sql_cmd_key in step:
                _example[sql_cmd_key] = step[sql_cmd_key]
                answer_key = sql_result_key  # this is the SQL execution input
        elif isinstance(step, str):
            # The preceding element should have set the answer_key.
            _example[answer_key] = step
    return _example
example: Any
try:
    result = local_chain(QUERY)
    print("*** Query succeeded")
    example = _parse_example(result)
except Exception as exc:
    print("*** Query failed")
    result = {
        "query": QUERY,
        "intermediate_steps": exc.intermediate_steps,
    }
    example = _parse_example(result)

# print for now; in reality you may want to write this out to a YAML file or
# database for manual fix-ups offline
yaml_example = yaml.dump(example, allow_unicode=True)
print("\n" + yaml_example)
> Entering new SQLDatabaseChain chain...
List all the customer first names that start with 'a'
SQLQuery:
/workspace/langchain/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
warnings.warn(
SELECT firstname FROM customer WHERE firstname LIKE '%a%'
SQLResult: [('François',), ('František',), ('Helena',), ('Astrid',), ('Daan',), ('Kara',), ('Eduardo',), ('Alexandre',), ('Fernanda',), ('Mark',), ('Frank',), ('Jack',), ('Dan',), ('Kathy',), ('Heather',), ('Frank',), ('Richard',), ('Patrick',), ('Julia',), ('Edward',), ('Martha',), ('Aaron',), ('Madalena',), ('Hannah',), ('Niklas',), ('Camille',), ('Marc',), ('Wyatt',), ('Isabelle',), ('Ladislav',), ('Lucas',), ('Johannes',), ('Stanisław',), ('Joakim',), ('Emma',), ('Mark',), ('Manoj',), ('Puja',)]
Answer:
/workspace/langchain/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
warnings.warn(
[('François', 'Frantiek', 'Helena', 'Astrid', 'Daan', 'Kara', 'Eduardo', 'Alexandre', 'Fernanda', 'Mark', 'Frank', 'Jack', 'Dan', 'Kathy', 'Heather', 'Frank', 'Richard', 'Patrick', 'Julia', 'Edward', 'Martha', 'Aaron', 'Madalena', 'Hannah', 'Niklas', 'Camille', 'Marc', 'Wyatt', 'Isabelle', 'Ladislav', 'Lucas', 'Johannes', 'Stanisaw', 'Joakim', 'Emma', 'Mark', 'Manoj', 'Puja']
> Finished chain.
*** Query succeeded
answer: '[(''François'', ''Frantiek'', ''Helena'', ''Astrid'', ''Daan'', ''Kara'',
''Eduardo'', ''Alexandre'', ''Fernanda'', ''Mark'', ''Frank'', ''Jack'', ''Dan'',
''Kathy'', ''Heather'', ''Frank'', ''Richard'', ''Patrick'', ''Julia'', ''Edward'',
''Martha'', ''Aaron'', ''Madalena'', ''Hannah'', ''Niklas'', ''Camille'', ''Marc'',
''Wyatt'', ''Isabelle'', ''Ladislav'', ''Lucas'', ''Johannes'', ''Stanisaw'', ''Joakim'',
''Emma'', ''Mark'', ''Manoj'', ''Puja'']'
input: List all the customer first names that start with 'a'
sql_cmd: SELECT firstname FROM customer WHERE firstname LIKE '%a%'
sql_result: '[(''François'',), (''František'',), (''Helena'',), (''Astrid'',), (''Daan'',),
(''Kara'',), (''Eduardo'',), (''Alexandre'',), (''Fernanda'',), (''Mark'',), (''Frank'',),
(''Jack'',), (''Dan'',), (''Kathy'',), (''Heather'',), (''Frank'',), (''Richard'',),
(''Patrick'',), (''Julia'',), (''Edward'',), (''Martha'',), (''Aaron'',), (''Madalena'',),
(''Hannah'',), (''Niklas'',), (''Camille'',), (''Marc'',), (''Wyatt'',), (''Isabelle'',),
(''Ladislav'',), (''Lucas'',), (''Johannes'',), (''Stanisław'',), (''Joakim'',),
(''Emma'',), (''Mark'',), (''Manoj'',), (''Puja'',)]'
table_info: "\nCREATE TABLE \"Customer\" (\n\t\"CustomerId\" INTEGER NOT NULL, \n\t\
\"FirstName\" NVARCHAR(40) NOT NULL, \n\t\"LastName\" NVARCHAR(20) NOT NULL, \n\t\
\"Company\" NVARCHAR(80), \n\t\"Address\" NVARCHAR(70), \n\t\"City\" NVARCHAR(40),\
\ \n\t\"State\" NVARCHAR(40), \n\t\"Country\" NVARCHAR(40), \n\t\"PostalCode\" NVARCHAR(10),\
\ \n\t\"Phone\" NVARCHAR(24), \n\t\"Fax\" NVARCHAR(24), \n\t\"Email\" NVARCHAR(60)\
\ NOT NULL, \n\t\"SupportRepId\" INTEGER, \n\tPRIMARY KEY (\"CustomerId\"), \n\t\
FOREIGN KEY(\"SupportRepId\") REFERENCES \"Employee\" (\"EmployeeId\")\n)\n\n/*\n\
3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\t\
City\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\t\
Embraer - Empresa Brasileira de Aeronáutica S.A.\tAv. Brigadeiro Faria Lima, 2170\t\
São José dos Campos\tSP\tBrazil\t12227-000\t+55 (12) 3923-5555\t+55 (12) 3923-5566\t\
luisg@embraer.com.br\t3\n2\tLeonie\tKöhler\tNone\tTheodor-Heuss-Straße 34\tStuttgart\t\
None\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\t\
Tremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\t\
None\tftremblay@gmail.com\t3\n*/"
Run the snippet above a few times, or log exceptions in your deployed environment, to collect many examples of the inputs, table_info and sql_cmd generated by your language model. The sql_cmd values will often be incorrect, and you can manually fix them up to build a collection of examples; here we use YAML to keep a neat record of our inputs and corrected SQL output that we can build up over time.
YAML_EXAMPLES = """
- input: How many customers are not from Brazil?
table_info: |
CREATE TABLE "Customer" (
"CustomerId" INTEGER NOT NULL,
"FirstName" NVARCHAR(40) NOT NULL,
"LastName" NVARCHAR(20) NOT NULL,
"Company" NVARCHAR(80),
"Address" NVARCHAR(70),
"City" NVARCHAR(40),
"State" NVARCHAR(40),
"Country" NVARCHAR(40),
"PostalCode" NVARCHAR(10),
"Phone" NVARCHAR(24),
"Fax" NVARCHAR(24),
"Email" NVARCHAR(60) NOT NULL,
"SupportRepId" INTEGER,
PRIMARY KEY ("CustomerId"),
FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")
)
sql_cmd: SELECT COUNT(*) FROM "Customer" WHERE NOT "Country" = "Brazil";
sql_result: "[(54,)]"
answer: 54 customers are not from Brazil.
- input: list all the genres that start with 'r'
table_info: |
CREATE TABLE "Genre" (
"GenreId" INTEGER NOT NULL,
|
https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html
|
be97fb0da2cf-43
|
CREATE TABLE "Genre" (
"GenreId" INTEGER NOT NULL,
"Name" NVARCHAR(120),
PRIMARY KEY ("GenreId")
)
/*
3 rows from Genre table:
GenreId Name
1 Rock
2 Jazz
3 Metal
*/
sql_cmd: SELECT "Name" FROM "Genre" WHERE "Name" LIKE 'r%';
sql_result: "[('Rock',), ('Rock and Roll',), ('Reggae',), ('R&B/Soul',)]"
answer: The genres that start with 'r' are Rock, Rock and Roll, Reggae and R&B/Soul.
"""
Now that you have some examples (with manually corrected output SQL), you can seed a few-shot prompt in the usual way:
from langchain import FewShotPromptTemplate, PromptTemplate
from langchain.chains.sql_database.prompt import _sqlite_prompt, PROMPT_SUFFIX
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma

example_prompt = PromptTemplate(
    input_variables=["table_info", "input", "sql_cmd", "sql_result", "answer"],
    template="{table_info}\n\nQuestion: {input}\nSQLQuery: {sql_cmd}\nSQLResult: {sql_result}\nAnswer: {answer}",
)

examples_dict = yaml.safe_load(YAML_EXAMPLES)

local_embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples_dict,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    local_embeddings,
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,  # type: ignore
    # This is the number of examples to produce and include per prompt.
    k=min(3, len(examples_dict)),
)

few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix=_sqlite_prompt + "Here are some examples:",
    suffix=PROMPT_SUFFIX,
    input_variables=["table_info", "input", "top_k"],
)
Using embedded DuckDB without persistence: data will be transient
The model should do better now with this few-shot prompt, especially for inputs similar to the examples you have seeded it with.
local_chain = SQLDatabaseChain.from_llm(local_llm, db, prompt=few_shot_prompt, use_query_checker=True, verbose=True, return_intermediate_steps=True)
result = local_chain("How many customers are from Brazil?")
> Entering new SQLDatabaseChain chain...
How many customers are from Brazil?
SQLQuery:SELECT count(*) FROM Customer WHERE Country = "Brazil";
SQLResult: [(5,)]
Answer:[5]
> Finished chain.
result = local_chain("How many customers are not from Brazil?")
> Entering new SQLDatabaseChain chain...
How many customers are not from Brazil?
SQLQuery:SELECT count(*) FROM customer WHERE country NOT IN (SELECT country FROM customer WHERE country = 'Brazil')
SQLResult: [(54,)]
Answer:54 customers are not from Brazil.
> Finished chain.
result = local_chain("How many customers are there in total?")
> Entering new SQLDatabaseChain chain...
How many customers are there in total?
SQLQuery:SELECT count(*) FROM Customer;
SQLResult: [(59,)]
Answer:There are 59 customers in total.
> Finished chain.
Router Chains: Selecting from multiple prompts with MultiRetrievalQAChain
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which retrieval system to use. Specifically, we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain most relevant to a given question and then answers the question using it.
from langchain.chains.router import MultiRetrievalQAChain
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS

sou_docs = TextLoader('../../state_of_the_union.txt').load_and_split()
sou_retriever = FAISS.from_documents(sou_docs, OpenAIEmbeddings()).as_retriever()

pg_docs = TextLoader('../../paul_graham_essay.txt').load_and_split()
pg_retriever = FAISS.from_documents(pg_docs, OpenAIEmbeddings()).as_retriever()

personal_texts = [
    "I love apple pie",
    "My favorite color is fuchsia",
    "My dream is to become a professional dancer",
    "I broke my arm when I was 12",
    "My parents are from Peru",
]
personal_retriever = FAISS.from_texts(personal_texts, OpenAIEmbeddings()).as_retriever()

retriever_infos = [
    {
        "name": "state of the union",
        "description": "Good for answering questions about the 2023 State of the Union address",
        "retriever": sou_retriever
    },
    {
        "name": "pg essay",
        "description": "Good for answering questions about Paul Graham's essay on his career",
        "retriever": pg_retriever
    },
    {
        "name": "personal",
        "description": "Good for answering questions about me",
        "retriever": personal_retriever
    }
]
chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)
print(chain.run("What did the president say about the economy?"))
> Entering new MultiRetrievalQAChain chain...
state of the union: {'query': 'What did the president say about the economy in the 2023 State of the Union address?'}
> Finished chain.
The president said that the economy was stronger than it had been a year prior, and that the American Rescue Plan helped create record job growth and fuel economic relief for millions of Americans. He also proposed a plan to fight inflation and lower costs for families, including cutting the cost of prescription drugs and energy, providing investments and tax credits for energy efficiency, and increasing access to child care and Pre-K.
print(chain.run("What is something Paul Graham regrets about his work?"))
> Entering new MultiRetrievalQAChain chain...
pg essay: {'query': 'What is something Paul Graham regrets about his work?'}
> Finished chain.
Paul Graham regrets that he did not take a vacation after selling his company, instead of immediately starting to paint.
print(chain.run("What is my background?"))
> Entering new MultiRetrievalQAChain chain...
personal: {'query': 'What is my background?'}
> Finished chain.
Your background is Peruvian.
print(chain.run("What year was the Internet created in?"))
> Entering new MultiRetrievalQAChain chain...
None: {'query': 'What year was the Internet created in?'}
> Finished chain.
The Internet was created in 1969 through a project called ARPANET, which was funded by the United States Department of Defense. However, the World Wide Web, which is often confused with the Internet, was created in 1989 by British computer scientist Tim Berners-Lee.
LLMCheckerChain
This notebook showcases how to use the LLMCheckerChain.
from langchain.chains import LLMCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.7)
text = "What type of mammal lays the biggest eggs?"
checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)
checker_chain.run(text)
> Entering new LLMCheckerChain chain...
> Entering new SequentialChain chain...
> Finished chain.
> Finished chain.
' No mammal lays the biggest eggs. The Elephant Bird, which was a species of giant bird, laid the largest eggs of any bird.'
Moderation
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both to user input and to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChain, in order to make sure any output the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is no single best way to handle it; it probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all these ways in this notebook.
In this notebook, we will show:
How to run any piece of text through a moderation chain.
How to append a Moderation chain to an LLMChain.
from langchain.llms import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate
How to use the moderation chain
Here’s an example of using the moderation chain with default settings (it will return a string explaining that the text was flagged).
moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is okay")
'This is okay'
moderation_chain.run("I will kill you")
"Text was found that violates OpenAI's content policy."
Here’s an example of using the moderation chain to throw an error.
moderation_chain_error = OpenAIModerationChain(error=True)
moderation_chain_error.run("This is okay")
'This is okay'
moderation_chain_error.run("I will kill you")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[7], line 1
----> 1 moderation_chain_error.run("I will kill you")
File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs)
136 if len(args) != 1:
137 raise ValueError("`run` supports only one positional argument.")
--> 138 return self(args[0])[self.output_keys[0]]
140 if kwargs and not args:
141 return self(kwargs)[self.output_keys[0]]
File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs)
108 if self.verbose:
109 print(
110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m"
111 )
--> 112 outputs = self._call(inputs)
113 if self.verbose:
114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m")
File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs)
79 text = inputs[self.input_key]
80 results = self.client.create(text)
---> 81 output = self._moderate(text, results["results"][0])
82 return {self.output_key: output}
File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results)
71 error_str = "Text was found that violates OpenAI's content policy."
72 if self.error:
---> 73 raise ValueError(error_str)
74 else:
75 return error_str
ValueError: Text was found that violates OpenAI's content policy.
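Since the chain raises a plain ValueError, your application can catch it and decide how to respond. A minimal sketch (the fallback message is our own, not from the docs):
try:
    response = moderation_chain_error.run("I will kill you")
except ValueError:
    # Hypothetical fallback: substitute a safe canned response for flagged text.
    response = "Sorry, I can't respond to that."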
Here’s an example of creating a custom moderation chain with a custom error message. It requires some knowledge of OpenAI’s moderation endpoint results (see docs here).
class CustomModeration(OpenAIModerationChain):
    def _moderate(self, text: str, results: dict) -> str:
        if results["flagged"]:
            error_str = f"The following text was found that violates OpenAI's content policy: {text}"
            return error_str
        return text
custom_moderation = CustomModeration()
custom_moderation.run("This is okay")
'This is okay'
custom_moderation.run("I will kill you")
"The following text was found that violates OpenAI's content policy: I will kill you"
How to append a Moderation chain to an LLMChain
To easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.
Let’s start with a simple example where the LLMChain has only a single input. For this purpose, we will prompt the model so it says something harmful.
prompt = PromptTemplate(template="{text}", input_variables=["text"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
text = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1: I will kill you
Person 2:"""
llm_chain.run(text)
' I will kill you'
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])
chain.run(text)
"Text was found that violates OpenAI's content policy."
Now let’s walk through an example of using it with an LLMChain that has multiple inputs (a bit trickier because we can’t use the SimpleSequentialChain).
prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
setup = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1:"""
new_input = "I will kill you"
inputs = {"setup": setup, "new_input": new_input}
llm_chain(inputs, return_only_outputs=True)
{'text': ' I will kill you'}
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"
moderation_chain.output_key = "sanitized_text"
chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])
chain(inputs, return_only_outputs=True)
{'sanitized_text': "Text was found that violates OpenAI's content policy."}
FLARE
This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).
Please see the original repo here.
The basic idea is:
Start answering a question
If you start generating tokens the model is uncertain about, look up relevant documents
Use those documents to continue generating
Repeat until finished
There is a lot of cool detail in how the lookup of relevant documents is done.
Basically, the tokens that the model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is Joe Biden went to Harvard, and the token the model was uncertain about was Harvard, then a good generated question would be where did Joe Biden go to college. This generated question is then used in a retrieval step to fetch relevant documents.
In order to set up this chain, we will need three things:
An LLM to generate the answer
An LLM to generate hypothetical questions to use in retrieval
A retriever to look up relevant documents with
The LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs).
The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap.
The retriever can be anything. In this notebook we will use the SERPER search engine, because it is cheap.
Other important parameters to understand:
max_generation_len: The maximum number of tokens to generate before stopping to check if any are uncertain
min_prob: Any tokens generated with probability below this will be considered uncertain
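To make the min_prob check concrete, here is a minimal sketch of the uncertainty test FLARE relies on (the token log-probabilities are made-up numbers, and this is not the FlareChain internals):
import math

# A token counts as "uncertain" when its generation probability falls below min_prob.
min_prob = 0.3
token_logprobs = {"Joe": -0.05, "Biden": -0.02, "went": -0.10, "to": -0.04, "Harvard": -1.60}
uncertain = [tok for tok, lp in token_logprobs.items() if math.exp(lp) < min_prob]
print(uncertain)  # ['Harvard'] -> generate "Where did Joe Biden go to college?" and retrieve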
Imports
import os
os.environ["SERPER_API_KEY"] = ""
import re
import numpy as np
from langchain.schema import BaseRetriever
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import Document
Retriever

class SerperSearchRetriever(BaseRetriever):
    def __init__(self, search):
        self.search = search

    def get_relevant_documents(self, query: str):
        return [Document(page_content=self.search.run(query))]

    async def aget_relevant_documents(self, query: str):
        raise NotImplementedError

retriever = SerperSearchRetriever(GoogleSerperAPIWrapper())
FLARE Chain

# We set this so we can see what exactly is going on
import langchain
langchain.verbose = True

from langchain.chains import FlareChain

flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=retriever,
    max_generation_len=164,
    min_prob=0.3,
)

query = "explain in great detail the difference between the langchain framework and baby agi"
flare.run(query)
> Entering new FlareChain chain...
Current Response:
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> RESPONSE:
> Entering new QuestionGeneratorChain chain...
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " uses a blockchain" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " distributed ledger to" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " set of tools" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " help developers create" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " create an AI system" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " NLP applications" is:
> Finished chain.
Generated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?']
> Entering new _OpenAIResponseChain chain...
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ...
LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain.
LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits documents, and creates embedding ... Missing: technology | Must include:technology.
Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain.
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-12
|
LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ...
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-13
|
LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-14
|
At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> RESPONSE:
> Finished chain.
> Finished chain.
' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. '
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-15
|
llm = OpenAI()
llm(query)
'\n\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains”. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\n\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\n\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'
flare.run("how are the origin stories of langchain and bitcoin similar or different?")
> Entering new FlareChain chain...
Current Response:
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> RESPONSE:
> Entering new QuestionGeneratorChain chain...
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-16
|
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
The question to which the answer is the term/entity/phrase " very different origin" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
The question to which the answer is the term/entity/phrase " 2020 by a" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-17
|
The question to which the answer is the term/entity/phrase " developers as a platform for creating and managing decentralized language learning applications." is:
> Finished chain.
Generated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?']
> Entering new _OpenAIResponseChain chain...
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-18
|
>>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men – meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ...
After all these giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
c583e8e38940-19
|
At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> RESPONSE:
> Finished chain.
> Finished chain.
' The origin stories of LangChain and Bitcoin are quite different. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency, while LangChain is a framework built around LLMs. '
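For reference, the chain that produced the runs above is built with FlareChain.from_llm — a minimal sketch, assuming a retriever has already been set up as in the Retriever section of this notebook; the parameter values shown are illustrative, not necessarily the exact configuration used here:

from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI

# Sketch of a FlareChain: generate, check token confidence, retrieve, repeat.
flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=retriever,     # assumed: any LangChain retriever defined earlier
    max_generation_len=164,  # tokens generated per step before confidence is checked
    min_prob=0.3,            # tokens below this probability trigger a retrieval query
)
flare.run("how are the origin stories of langchain and bitcoin similar or different?")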
|
https://python.langchain.com/en/latest/modules/chains/examples/flare.html
|
25ed10898c13-0
|
BashChain#
This notebook showcases using LLMs and a bash process to perform simple filesystem commands.
from langchain.chains import LLMBashChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
text = "Please write a bash script that prints 'Hello World' to the console."
bash_chain = LLMBashChain.from_llm(llm, verbose=True)
bash_chain.run(text)
> Entering new LLMBashChain chain...
Please write a bash script that prints 'Hello World' to the console.
```bash
echo "Hello World"
```
Code: ['echo "Hello World"']
Answer: Hello World
> Finished chain.
'Hello World\n'
Customize Prompt#
You can also customize the prompt that is used. Here is an example prompt that instructs the model to avoid using the ‘echo’ utility.
from langchain.prompts.prompt import PromptTemplate
from langchain.chains.llm_bash.prompt import BashOutputParser
_PROMPT_TEMPLATE = """If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:
Question: "copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'"
I need to take the following actions:
- List all files in the directory
- Create a new directory
- Copy the files from the first directory into the second directory
```bash
ls
mkdir myNewDirectory
cp -r target/* myNewDirectory
```
Do not use 'echo' when writing the script.
That is the format. Begin!
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_bash.html
|
25ed10898c13-1
|
Question: {question}"""
PROMPT = PromptTemplate(input_variables=["question"], template=_PROMPT_TEMPLATE, output_parser=BashOutputParser())
bash_chain = LLMBashChain.from_llm(llm, prompt=PROMPT, verbose=True)
text = "Please write a bash script that prints 'Hello World' to the console."
bash_chain.run(text)
> Entering new LLMBashChain chain...
Please write a bash script that prints 'Hello World' to the console.
```bash
printf "Hello World\n"
```
Code: ['printf "Hello World\\n"']
Answer: Hello World
> Finished chain.
'Hello World\n'
Persistent Terminal#
By default, the chain will run in a separate subprocess each time it is called. This behavior can be changed by instantiating with a persistent bash process.
from langchain.utilities.bash import BashProcess
persistent_process = BashProcess(persistent=True)
bash_chain = LLMBashChain.from_llm(llm, bash_process=persistent_process, verbose=True)
text = "List the current directory then move up a level."
bash_chain.run(text)
> Entering new LLMBashChain chain...
List the current directory then move up a level.
```bash
ls
cd ..
```
Code: ['ls', 'cd ..']
Answer: api.ipynb llm_summarization_checker.ipynb
constitutional_chain.ipynb moderation.ipynb
llm_bash.ipynb openai_openapi.yaml
llm_checker.ipynb openapi.ipynb
llm_math.ipynb pal.ipynb
llm_requests.ipynb sqlite.ipynb
> Finished chain.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_bash.html
|
25ed10898c13-2
|
'api.ipynb\t\t\tllm_summarization_checker.ipynb\r\nconstitutional_chain.ipynb\tmoderation.ipynb\r\nllm_bash.ipynb\t\t\topenai_openapi.yaml\r\nllm_checker.ipynb\t\topenapi.ipynb\r\nllm_math.ipynb\t\t\tpal.ipynb\r\nllm_requests.ipynb\t\tsqlite.ipynb'
# Run the same command again and see that the state is maintained between calls
bash_chain.run(text)
> Entering new LLMBashChain chain...
List the current directory then move up a level.
```bash
ls
cd ..
```
Code: ['ls', 'cd ..']
Answer: examples getting_started.ipynb index_examples
generic how_to_guides.rst
> Finished chain.
'examples\t\tgetting_started.ipynb\tindex_examples\r\ngeneric\t\t\thow_to_guides.rst'
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_bash.html
|
ed161ff99290-0
|
LLMRequestsChain#
Uses the requests library to get HTML results from a URL and then an LLM to parse those results
from langchain.llms import OpenAI
from langchain.chains import LLMRequestsChain, LLMChain
from langchain.prompts import PromptTemplate
template = """Between >>> and <<< are the raw search result text from google.
Extract the answer to the question '{query}' or say "not found" if the information is not contained.
Use the format
Extracted:<answer or "not found">
>>> {requests_result} <<<
Extracted:"""
PROMPT = PromptTemplate(
input_variables=["query", "requests_result"],
template=template,
)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))
question = "What are the Three (3) biggest countries, and their respective sizes?"
inputs = {
"query": question,
"url": "https://www.google.com/search?q=" + question.replace(" ", "+")
}
chain(inputs)
{'query': 'What are the Three (3) biggest countries, and their respective sizes?',
'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?',
'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), United States (9,826,675 km²)'}
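The same chain can be reused for other queries — a minimal sketch, where the question below is just an illustrative placeholder:

# Reuse the chain defined above with a different search query.
question = "What is the tallest mountain in the world?"
inputs = {
    "query": question,
    "url": "https://www.google.com/search?q=" + question.replace(" ", "+"),
}
print(chain(inputs)["output"])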
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_requests.html
|
fce77a58dd66-0
|
LLMSummarizationCheckerChain#
This notebook shows some examples of LLMSummarizationCheckerChain in use with different types of texts. It has a few distinct differences from the LLMCheckerChain: it makes no assumptions about the format of the input text (or summary).
Additionally, because LLMs can hallucinate while fact checking or get confused by context, it is sometimes beneficial to run the checker multiple times. It does this by feeding the rewritten “True” result back into itself and re-checking the “facts” for truth. As the examples below show, this can be very effective in arriving at a generally true body of text.
You can control the number of times the checker runs by setting the max_checks parameter. The default is 2, but you can set it to 1 if you don’t want any double-checking.
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=2)
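If you don’t want the re-checking loop at all, the same constructor accepts max_checks=1 — a minimal sketch reusing the llm defined above:

# A single pass: extract facts, check them, rewrite once, and stop.
single_pass_checker = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=1)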
text = """
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside."
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-1
|
These discoveries can spark a child's imagination about the infinite wonders of the universe."""
checker_chain.run(text)
> Entering new LLMSummarizationCheckerChain chain...
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside."
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas."
• The telescope captured images of galaxies that are over 13 billion years old.
• JWST took the very first pictures of a planet outside of our own solar system.
• These distant worlds are called "exoplanets."
"""
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-2
|
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The telescope captured images of galaxies that are over 13 billion years old. - True
• JWST took the very first pictures of a planet outside of our own solar system. - False. The first exoplanet was discovered in 1992, before the JWST was launched.
• These distant worlds are called "exoplanets." - True
"""
Original Summary:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside."
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""
Using these checked assertions, rewrite the original summary to be completely true.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-3
|
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The telescope captured images of galaxies that are over 13 billion years old. - True
• JWST took the very first pictures of a planet outside of our own solar system. - False. The first exoplanet was discovered in 1992, before the JWST was launched.
• These distant worlds are called "exoplanets." - True
"""
Result:
> Finished chain.
> Finished chain.
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-4
|
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-5
|
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas."
• The light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system.
• Exoplanets were first discovered in 1992.
• The JWST has allowed us to see exoplanets in greater detail.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The light from these galaxies has been traveling for over 13 billion years to reach us. - True
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004.
• Exoplanets were first discovered in 1992. - True
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-6
|
• The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide.
"""
Original Summary:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-7
|
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The light from these galaxies has been traveling for over 13 billion years to reach us. - True
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004.
• Exoplanets were first discovered in 1992. - True
• The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide.
"""
Result:
> Finished chain.
> Finished chain.
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-8
|
• Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
> Finished chain.
'Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):\n• In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.\n• The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.\n• Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023.\nThese discoveries can spark a child\'s imagination about the infinite wonders of the universe.'
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-9
|
text = "The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea."
checker_chain.run(text)
> Entering new LLMSummarizationCheckerChain chain...
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-10
|
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean.
- It is the smallest of the five oceans.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- The sea is named after the island of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Norwegian Sea.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-11
|
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-12
|
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""
Result:
> Finished chain.
> Finished chain.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-13
|
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-14
|
- It has an area of 465,000 square miles.
- It is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- It is named after the island of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Norwegian Sea.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""
Original Summary:
"""
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-15
|
"""
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-16
|
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-17
|
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- The sea is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Atlantic Ocean.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-18
|
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""
Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-19
|
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""
Result:
> Finished chain.
> Finished chain.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-20
|
"""
Result:
> Finished chain.
> Finished chain.
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean.
> Finished chain.
"The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean."
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)
text = "Mammals can lay eggs, birds can lay eggs, therefore birds are mammals."
checker_chain.run(text)
> Entering new LLMSummarizationCheckerChain chain...
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-21
|
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- Mammals can lay eggs
- Birds can lay eggs
- Birds are mammals
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""
Original Summary:
"""
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-22
|
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""
Result:
> Finished chain.
> Finished chain.
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
> Entering new SequentialChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text.
Format your output as a bulleted list.
Text:
"""
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
"""
Facts:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
|
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
|
fce77a58dd66-23
|
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.
Here is a bullet point list of facts:
"""
- Birds and mammals are both capable of laying eggs.
- Birds are not mammals.
- Birds are a class of their own.
"""
For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".
If the fact is false, explain why.
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.
Checked Assertions:
"""
- Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs.
- Birds are not mammals: True. Birds are a class of their own, separate from mammals.
- Birds are a class of their own: True. Birds are a class of their own, separate from mammals.
"""
Original Summary:
"""
Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
"""
Using these checked assertions, rewrite the original summary to be completely true.
The output should have the same structure and formatting as the original summary.
Summary:
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".
Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False
===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True
===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False
===
Checked Assertions:"""
- Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs.
- Birds are not mammals: True. Birds are a class of their own, separate from mammals.
- Birds are a class of their own: True. Birds are a class of their own, separate from mammals.
"""
Result:
> Finished chain.
> Finished chain.
> Finished chain.
'Birds are not mammals, but they are a class of their own. They lay eggs, unlike mammals which give birth to live young.'
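For reference, here is a minimal sketch of the kind of call that produces a run like the one above (a sketch, assuming an OpenAI API key is configured; max_checks caps the number of check-and-rewrite passes the chain makes):
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=2, verbose=True)
checker_chain.run("Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.")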
Summarization#
This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: stuff, map_reduce, and refine. For a more in depth explanation of what these chain types are, see here.
Prepare Data#
First we prepare the data. For this example we create multiple documents from one long one, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do after you fetch the documents).
from langchain import OpenAI, PromptTemplate, LLMChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter()
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
Quickstart#
If you just want to get started as quickly as possible, this is the recommended way to do it:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
' In response to Russian aggression in Ukraine, the United States and its allies are taking action to hold Putin accountable, including economic sanctions, asset seizures, and military assistance. The US is also providing economic and humanitarian aid to Ukraine, and has passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and create jobs. The US remains unified and determined to protect Ukraine and the free world.'
If you want more control over and insight into what is happening, see the information below.
The stuff Chain#
This section shows the results of using the stuff chain to do summarization.
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
' In his speech, President Biden addressed the crisis in Ukraine, the American Rescue Plan, and the Bipartisan Infrastructure Law. He discussed the need to invest in America, educate Americans, and build the economy from the bottom up. He also announced the release of 60 million barrels of oil from reserves around the world, and the creation of a dedicated task force to go after the crimes of Russian oligarchs. He concluded by emphasizing the need to Buy American and use taxpayer dollars to rebuild America.'
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="stuff", prompt=PROMPT)
chain.run(docs)
"\n\nIn questa serata, il Presidente degli Stati Uniti ha annunciato una serie di misure per affrontare la crisi in Ucraina, causata dall'aggressione di Putin. Ha anche annunciato l'invio di aiuti economici, militari e umanitari all'Ucraina. Ha anche annunciato che gli Stati Uniti e i loro alleati stanno imponendo sanzioni economiche a Putin e stanno rilasciando 60 milioni di barili di petrolio dalle riserve di tutto il mondo. Inoltre, ha annunciato che il Dipartimento di Giustizia degli Stati Uniti sta creando una task force dedicata ai crimini degli oligarchi russi. Il Presidente ha anche annunciato l'approvazione della legge bipartitica sull'infrastruttura, che prevede investimenti per la ricostruzione dell'America. Questo porterà a creare posti"
The map_reduce Chain#
This section shows the results of using the map_reduce chain to do summarization.
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
" In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and releasing oil from its Strategic Petroleum Reserve. President Biden and Vice President Harris have passed legislation to help struggling families and rebuild America's infrastructure."
Intermediate Steps
We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'map_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
' The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.',
" President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs."],
'output_text': " In response to Russia's aggression in Ukraine, the United States and its allies have imposed economic sanctions and are taking other measures to hold Putin accountable. The US is also providing economic and military assistance to Ukraine, protecting NATO countries, and passing legislation to help struggling families and rebuild America's infrastructure. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens."}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Gli Stati Uniti e i loro alleati stanno ora imponendo sanzioni economiche a Putin e stanno tagliando l'accesso della Russia alla tecnologia. Il Dipartimento di Giustizia degli Stati Uniti sta anche creando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nStiamo unendo le nostre forze con quelle dei nostri alleati europei per sequestrare yacht, appartamenti di lusso e jet privati di Putin. Abbiamo chiuso lo spazio aereo americano ai voli russi e stiamo fornendo più di un miliardo di dollari in assistenza all'Ucraina. Abbiamo anche mobilitato le nostre forze terrestri, aeree e navali per proteggere i paesi della NATO. Abbiamo anche rilasciato 60 milioni di barili di petrolio dalle riserve di tutto il mondo, di cui 30 milioni dalla nostra riserva strategica di petrolio. Stiamo affrontando una prova reale e ci vorrà del tempo, ma alla fine Putin non riuscirà a spegnere l'amore dei popoli per la libertà.",
"\n\nIl Presidente Biden ha lottato per passare l'American Rescue Plan per aiutare le persone che soffrivano a causa della pandemia. Il piano ha fornito sollievo economico immediato a milioni di americani, ha aiutato a mettere cibo sulla loro tavola, a mantenere un tetto sopra le loro teste e a ridurre il costo dell'assicurazione sanitaria. Il piano ha anche creato più di 6,5 milioni di nuovi posti di lavoro, il più alto numero di posti di lavoro creati in un anno nella storia degli Stati Uniti. Il Presidente Biden ha anche firmato la legge bipartitica sull'infrastruttura, la più ampia iniziativa di ricostruzione della storia degli Stati Uniti. Il piano prevede di modernizzare le strade, gli aeroporti, i porti e le vie navigabili in"],
'output_text': "\n\nIl Presidente Biden sta lavorando per aiutare le persone che soffrono a causa della pandemia attraverso l'American Rescue Plan e la legge bipartitica sull'infrastruttura. Gli Stati Uniti e i loro alleati stanno anche imponendo sanzioni economiche a Putin e tagliando l'accesso della Russia alla tecnologia. Stanno anche sequestrando yacht, appartamenti di lusso e jet privati di Putin e fornendo più di un miliardo di dollari in assistenza all'Ucraina. Alla fine, Putin non riuscirà a spegnere l'amore dei popoli per la libertà."}
The refine Chain#
This section shows the results of using the refine chain to do summarization.
chain = load_summarize_chain(llm, chain_type="refine")
chain.run(docs)
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This investment will"
Intermediate Steps
We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True)
chain({"input_documents": docs}, return_only_outputs=True)
{'refine_steps': [" In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. Putin's war on Ukraine has left Russia weaker and the rest of the world stronger, with the world uniting in support of democracy and peace.",
"\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"],
'output_text': "\n\nIn response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains. We are joining with our European allies to find and seize the assets of Russian oligarchs, including yachts, luxury apartments, and private jets. The U.S. is also closing off American airspace to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The U.S. and its allies are providing support to the Ukrainians in their fight for freedom, including military, economic, and humanitarian assistance. The U.S. is also mobilizing ground forces, air squadrons, and ship deployments to protect NATO countries. The U.S. and its allies are also releasing 60 million barrels of oil from reserves around the world, with the U.S. contributing 30 million barrels from its own Strategic Petroleum Reserve. In addition, the U.S. has passed the American Rescue Plan to provide immediate economic relief for tens of millions of Americans, and the Bipartisan Infrastructure Law to rebuild America and create jobs. This includes investing"}
Custom Prompts
You can also use your own prompts with this chain. In this example, we will respond in Italian.
prompt_template = """Write a concise summary of the following:
{text}
CONCISE SUMMARY IN ITALIAN:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
refine_template = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary in Italian"
"If the context isn't useful, return the original summary."
)
refine_prompt = PromptTemplate(
input_variables=["existing_answer", "text"],
template=refine_template,
)
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=PROMPT, refine_prompt=refine_prompt)
chain({"input_documents": docs}, return_only_outputs=True)
{'intermediate_steps': ["\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia e bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi.",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare,",
"\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."],
'output_text': "\n\nQuesta sera, ci incontriamo come democratici, repubblicani e indipendenti, ma soprattutto come americani. La Russia di Putin ha cercato di scuotere le fondamenta del mondo libero, ma ha sottovalutato la forza della gente ucraina. Insieme ai nostri alleati, stiamo imponendo sanzioni economiche, tagliando l'accesso della Russia alla tecnologia, bloccando i suoi più grandi istituti bancari dal sistema finanziario internazionale e chiudendo lo spazio aereo americano a tutti i voli russi. Il Dipartimento di Giustizia degli Stati Uniti sta anche assemblando una task force dedicata per andare dopo i crimini degli oligarchi russi. Stiamo fornendo più di un miliardo di dollari in assistenza diretta all'Ucraina e fornendo assistenza militare."}
Chat Over Documents with Chat History#
This notebook goes over how to set up a chain to chat over documents with chat history using a ConversationalRetrievalChain. The only difference between this chain and the RetrievalQAChain is that this one allows you to pass in a chat history, which can be used for follow-up questions.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
Load in documents. You can replace this with a loader for whatever type of data you want.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
If you had multiple loaders that you wanted to combine, you could do something like this (a runnable sketch follows the comment block):
# loaders = [....]
# docs = []
# for loader in loaders:
# docs.extend(loader.load())
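For instance, a runnable sketch using the TextLoader imported above (the second file path is a hypothetical placeholder):
loaders = [
    TextLoader("../../state_of_the_union.txt"),
    TextLoader("../../another_document.txt"),  # hypothetical placeholder path
]
docs = []
for loader in loaders:
    docs.extend(loader.load())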
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
Using embedded DuckDB without persistence: data will be transient
We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
We now initialize the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), memory=memory)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "Did he mention who she suceeded"
result = qa({"question": query})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Pass in chat history#
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']
' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'
Using a different model for condensing the question#
This chain has two steps. First, it condenses the current question and the chat history into a standalone question. This is necessary to create a standalone query whose embedding can be used for retrieval. After that, it retrieves the relevant documents and answers the question using retrieval-augmented generation with a separate model. Part of the power of LangChain's declarative nature is that you can easily use a separate language model for each call. This can be useful for using a cheaper, faster model for the simpler task of condensing the question, and a more expensive model for answering it. Here is an example of doing so.
from langchain.chat_models import ChatOpenAI
qa = ConversationalRetrievalChain.from_llm(
ChatOpenAI(temperature=0, model="gpt-4"),
vectorstore.as_retriever(),
condense_question_llm = ChatOpenAI(temperature=0, model='gpt-3.5-turbo'),
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect which documents were retrieved.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../state_of_the_union.txt'})
ConversationalRetrievalChain with search_distance#
If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.
vectordbkwargs = {"search_distance": 0.9}
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine-documents chains with the ConversationalRetrievalChain.
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
ConversationalRetrievalChain with Question Answering with sources#
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"
ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example.
from langchain.chains.llm import LLMChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = qa({"question": query, "chat_history": chat_history})
Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.
get_chat_history Function#
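By default, the chain formats the chat history into the condense-question prompt itself; you can override this by passing in your own get_chat_history function. A minimal sketch, assuming the chat history is a list of (human, ai) tuples as in the examples above:
def get_chat_history(inputs) -> str:
    # Join each (human, ai) exchange into a single formatted block
    res = []
    for human, ai in inputs:
        res.append(f"Human: {human}\nAssistant: {ai}")
    return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)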